00:00:00.001 Started by upstream project "autotest-per-patch" build number 132726
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.031 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.032 The recommended git tool is: git
00:00:00.032 using credential 00000000-0000-0000-0000-000000000002
00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.049 Fetching changes from the remote Git repository
00:00:00.053 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.073 Using shallow fetch with depth 1
00:00:00.073 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.073 > git --version # timeout=10
00:00:00.108 > git --version # 'git version 2.39.2'
00:00:00.108 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.172 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.172 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.349 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.363 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.376 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.376 > git config core.sparsecheckout # timeout=10
00:00:02.391 > git read-tree -mu HEAD # timeout=10
00:00:02.411 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.439 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.440 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.546 [Pipeline] Start of Pipeline
00:00:02.556 [Pipeline] library
00:00:02.557 Loading library shm_lib@master
00:00:02.557 Library shm_lib@master is cached. Copying from home.
00:00:02.570 [Pipeline] node
00:00:02.578 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.579 [Pipeline] {
00:00:02.586 [Pipeline] catchError
00:00:02.587 [Pipeline] {
00:00:02.598 [Pipeline] wrap
00:00:02.607 [Pipeline] {
00:00:02.615 [Pipeline] stage
00:00:02.617 [Pipeline] { (Prologue)
00:00:02.634 [Pipeline] echo
00:00:02.636 Node: VM-host-SM17
00:00:02.642 [Pipeline] cleanWs
00:00:02.651 [WS-CLEANUP] Deleting project workspace...
00:00:02.651 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.657 [WS-CLEANUP] done
00:00:02.875 [Pipeline] setCustomBuildProperty
00:00:02.965 [Pipeline] httpRequest
00:00:03.354 [Pipeline] echo
00:00:03.356 Sorcerer 10.211.164.101 is alive
00:00:03.365 [Pipeline] retry
00:00:03.366 [Pipeline] {
00:00:03.380 [Pipeline] httpRequest
00:00:03.385 HttpMethod: GET
00:00:03.385 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.386 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.392 Response Code: HTTP/1.1 200 OK
00:00:03.392 Success: Status code 200 is in the accepted range: 200,404
00:00:03.393 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.278 [Pipeline] }
00:00:12.296 [Pipeline] // retry
00:00:12.304 [Pipeline] sh
00:00:12.585 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.603 [Pipeline] httpRequest
00:00:13.306 [Pipeline] echo
00:00:13.308 Sorcerer 10.211.164.101 is alive
00:00:13.318 [Pipeline] retry
00:00:13.320 [Pipeline] {
00:00:13.335 [Pipeline] httpRequest
00:00:13.339 HttpMethod: GET
00:00:13.340 URL: http://10.211.164.101/packages/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz
00:00:13.340 Sending request to url: http://10.211.164.101/packages/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz
00:00:13.345 Response Code: HTTP/1.1 200 OK
00:00:13.345 Success: Status code 200 is in the accepted range: 200,404
00:00:13.346 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz
00:06:07.218 [Pipeline] }
00:06:07.237 [Pipeline] // retry
00:06:07.243 [Pipeline] sh
00:06:07.522 + tar --no-same-owner -xf spdk_cf089b398db10e05fa361e1ed44b582860706d22.tar.gz
00:06:10.846 [Pipeline] sh
00:06:11.131 + git -C spdk log --oneline -n5
00:06:11.131 cf089b398 thread: fd_group-based interrupts
00:06:11.131 8a4656bc1 thread: move interrupt allocation to a function
00:06:11.131 09908f908 util: add method for setting fd_group's wrapper
00:06:11.131 697130caf util: multi-level fd_group nesting
00:06:11.131 6696ebaae util: keep track of nested child fd_groups
00:06:11.150 [Pipeline] writeFile
00:06:11.164 [Pipeline] sh
00:06:11.444 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:06:11.454 [Pipeline] sh
00:06:11.735 + cat autorun-spdk.conf
00:06:11.735 SPDK_RUN_FUNCTIONAL_TEST=1
00:06:11.735 SPDK_RUN_ASAN=1
00:06:11.735 SPDK_RUN_UBSAN=1
00:06:11.735 SPDK_TEST_RAID=1
00:06:11.735 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:11.754 RUN_NIGHTLY=0
00:06:11.755 [Pipeline] }
00:06:11.770 [Pipeline] // stage
00:06:11.787 [Pipeline] stage
00:06:11.789 [Pipeline] { (Run VM)
00:06:11.800 [Pipeline] sh
00:06:12.079 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:06:12.079 + echo 'Start stage prepare_nvme.sh'
00:06:12.079 Start stage prepare_nvme.sh
00:06:12.079 + [[ -n 7 ]]
00:06:12.079 + disk_prefix=ex7
00:06:12.079 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:06:12.079 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:06:12.079 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:06:12.079 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:12.079 ++ SPDK_RUN_ASAN=1
00:06:12.079 ++ SPDK_RUN_UBSAN=1
00:06:12.079 ++ SPDK_TEST_RAID=1
00:06:12.079 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:12.079 ++ RUN_NIGHTLY=0
00:06:12.079 + cd /var/jenkins/workspace/raid-vg-autotest
00:06:12.079 + nvme_files=()
00:06:12.079 + declare -A nvme_files
00:06:12.079 + backend_dir=/var/lib/libvirt/images/backends
00:06:12.079 + nvme_files['nvme.img']=5G
00:06:12.079 + nvme_files['nvme-cmb.img']=5G
00:06:12.079 + nvme_files['nvme-multi0.img']=4G
00:06:12.079 + nvme_files['nvme-multi1.img']=4G
00:06:12.079 + nvme_files['nvme-multi2.img']=4G
00:06:12.079 + nvme_files['nvme-openstack.img']=8G
00:06:12.079 + nvme_files['nvme-zns.img']=5G
00:06:12.079 + (( SPDK_TEST_NVME_PMR == 1 ))
00:06:12.079 + (( SPDK_TEST_FTL == 1 ))
00:06:12.079 + (( SPDK_TEST_NVME_FDP == 1 ))
00:06:12.079 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:06:12.079 + for nvme in "${!nvme_files[@]}"
00:06:12.079 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:06:12.079 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:06:12.079 + for nvme in "${!nvme_files[@]}"
00:06:12.079 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:06:12.079 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:06:12.079 + for nvme in "${!nvme_files[@]}"
00:06:12.079 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:06:12.079 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:06:12.079 + for nvme in "${!nvme_files[@]}"
00:06:12.079 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:06:12.079 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:06:12.079 + for nvme in "${!nvme_files[@]}"
00:06:12.079 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:06:12.079 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:06:12.079 + for nvme in "${!nvme_files[@]}"
00:06:12.079 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:06:12.079 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:06:12.079 + for nvme in "${!nvme_files[@]}"
00:06:12.079 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:06:13.013 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:06:13.013 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:06:13.013 + echo 'End stage prepare_nvme.sh'
00:06:13.013 End stage prepare_nvme.sh
00:06:13.025 [Pipeline] sh
00:06:13.310 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:06:13.310 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:06:13.310
00:06:13.310 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:06:13.310 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:06:13.310 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:06:13.310 HELP=0
00:06:13.310 DRY_RUN=0
00:06:13.310 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:06:13.310 NVME_DISKS_TYPE=nvme,nvme,
00:06:13.310 NVME_AUTO_CREATE=0
00:06:13.310 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:06:13.310 NVME_CMB=,,
00:06:13.310 NVME_PMR=,,
00:06:13.310 NVME_ZNS=,,
00:06:13.310 NVME_MS=,,
00:06:13.310 NVME_FDP=,,
00:06:13.310 SPDK_VAGRANT_DISTRO=fedora39
00:06:13.310 SPDK_VAGRANT_VMCPU=10
00:06:13.310 SPDK_VAGRANT_VMRAM=12288
00:06:13.310 SPDK_VAGRANT_PROVIDER=libvirt
00:06:13.310 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:06:13.310 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:06:13.310 SPDK_OPENSTACK_NETWORK=0
00:06:13.310 VAGRANT_PACKAGE_BOX=0
00:06:13.310 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:06:13.310 FORCE_DISTRO=true
00:06:13.310 VAGRANT_BOX_VERSION=
00:06:13.310 EXTRA_VAGRANTFILES=
00:06:13.310 NIC_MODEL=e1000
00:06:13.310
00:06:13.310 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:06:13.310 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:06:16.593 Bringing machine 'default' up with 'libvirt' provider...
00:06:17.528 ==> default: Creating image (snapshot of base box volume).
00:06:17.528 ==> default: Creating domain with the following settings...
00:06:17.528 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733490023_0fe276e781437686bec2
00:06:17.528 ==> default: -- Domain type: kvm
00:06:17.528 ==> default: -- Cpus: 10
00:06:17.528 ==> default: -- Feature: acpi
00:06:17.528 ==> default: -- Feature: apic
00:06:17.528 ==> default: -- Feature: pae
00:06:17.528 ==> default: -- Memory: 12288M
00:06:17.528 ==> default: -- Memory Backing: hugepages:
00:06:17.528 ==> default: -- Management MAC:
00:06:17.528 ==> default: -- Loader:
00:06:17.528 ==> default: -- Nvram:
00:06:17.528 ==> default: -- Base box: spdk/fedora39
00:06:17.528 ==> default: -- Storage pool: default
00:06:17.528 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733490023_0fe276e781437686bec2.img (20G)
00:06:17.528 ==> default: -- Volume Cache: default
00:06:17.528 ==> default: -- Kernel:
00:06:17.528 ==> default: -- Initrd:
00:06:17.528 ==> default: -- Graphics Type: vnc
00:06:17.528 ==> default: -- Graphics Port: -1
00:06:17.528 ==> default: -- Graphics IP: 127.0.0.1
00:06:17.528 ==> default: -- Graphics Password: Not defined
00:06:17.528 ==> default: -- Video Type: cirrus
00:06:17.528 ==> default: -- Video VRAM: 9216
00:06:17.528 ==> default: -- Sound Type:
00:06:17.528 ==> default: -- Keymap: en-us
00:06:17.528 ==> default: -- TPM Path:
00:06:17.528 ==> default: -- INPUT: type=mouse, bus=ps2
00:06:17.528 ==> default: -- Command line args:
00:06:17.528 ==> default: -> value=-device,
00:06:17.528 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:06:17.528 ==> default: -> value=-drive,
00:06:17.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:06:17.529 ==> default: -> value=-device,
00:06:17.529 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:17.529 ==> default: -> value=-device,
00:06:17.529 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:06:17.529 ==> default: -> value=-drive,
00:06:17.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:06:17.529 ==> default: -> value=-device,
00:06:17.529 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:17.529 ==> default: -> value=-drive,
00:06:17.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:06:17.529 ==> default: -> value=-device,
00:06:17.529 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:17.529 ==> default: -> value=-drive,
00:06:17.529 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:06:17.529 ==> default: -> value=-device,
00:06:17.529 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:06:17.788 ==> default: Creating shared folders metadata...
00:06:17.788 ==> default: Starting domain.
00:06:19.163 ==> default: Waiting for domain to get an IP address...
00:06:41.102 ==> default: Waiting for SSH to become available...
00:06:41.668 ==> default: Configuring and enabling network interfaces...
00:06:45.875 default: SSH address: 192.168.121.83:22
00:06:45.875 default: SSH username: vagrant
00:06:45.875 default: SSH auth method: private key
00:06:48.403 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:06:56.518 ==> default: Mounting SSHFS shared folder...
00:06:57.893 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:06:57.893 ==> default: Checking Mount..
00:06:59.269 ==> default: Folder Successfully Mounted!
00:06:59.269 ==> default: Running provisioner: file...
00:06:59.835 default: ~/.gitconfig => .gitconfig
00:07:00.402
00:07:00.402 SUCCESS!
00:07:00.402
00:07:00.402 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:07:00.402 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:07:00.402 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:07:00.402
00:07:00.409 [Pipeline] }
00:07:00.424 [Pipeline] // stage
00:07:00.433 [Pipeline] dir
00:07:00.434 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:07:00.435 [Pipeline] {
00:07:00.447 [Pipeline] catchError
00:07:00.448 [Pipeline] {
00:07:00.458 [Pipeline] sh
00:07:00.734 + vagrant ssh-config --host vagrant
00:07:00.734 + sed -ne /^Host/,$p
00:07:00.734 + tee ssh_conf
00:07:04.913 Host vagrant
00:07:04.913 HostName 192.168.121.83
00:07:04.913 User vagrant
00:07:04.913 Port 22
00:07:04.913 UserKnownHostsFile /dev/null
00:07:04.913 StrictHostKeyChecking no
00:07:04.913 PasswordAuthentication no
00:07:04.913 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:07:04.913 IdentitiesOnly yes
00:07:04.913 LogLevel FATAL
00:07:04.913 ForwardAgent yes
00:07:04.913 ForwardX11 yes
00:07:04.913
00:07:04.927 [Pipeline] withEnv
00:07:04.930 [Pipeline] {
00:07:04.946 [Pipeline] sh
00:07:05.222 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:07:05.222 source /etc/os-release
00:07:05.222 [[ -e /image.version ]] && img=$(< /image.version)
00:07:05.222 # Minimal, systemd-like check.
00:07:05.222 if [[ -e /.dockerenv ]]; then
00:07:05.222 # Clear garbage from the node's name:
00:07:05.222 # agt-er_autotest_547-896 -> autotest_547-896
00:07:05.222 # $HOSTNAME is the actual container id
00:07:05.222 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:07:05.222 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:07:05.222 # We can assume this is a mount from a host where container is running,
00:07:05.222 # so fetch its hostname to easily identify the target swarm worker.
00:07:05.222 container="$(< /etc/hostname) ($agent)"
00:07:05.222 else
00:07:05.222 # Fallback
00:07:05.222 container=$agent
00:07:05.222 fi
00:07:05.222 fi
00:07:05.223 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:07:05.223
00:07:05.490 [Pipeline] }
00:07:05.507 [Pipeline] // withEnv
00:07:05.515 [Pipeline] setCustomBuildProperty
00:07:05.531 [Pipeline] stage
00:07:05.533 [Pipeline] { (Tests)
00:07:05.568 [Pipeline] sh
00:07:05.847 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:07:05.860 [Pipeline] sh
00:07:06.136 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:07:06.153 [Pipeline] timeout
00:07:06.154 Timeout set to expire in 1 hr 30 min
00:07:06.156 [Pipeline] {
00:07:06.170 [Pipeline] sh
00:07:06.448 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:07:07.014 HEAD is now at cf089b398 thread: fd_group-based interrupts
00:07:07.027 [Pipeline] sh
00:07:07.309 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:07:07.580 [Pipeline] sh
00:07:07.858 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:07:08.132 [Pipeline] sh
00:07:08.410 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:07:08.680 ++ readlink -f spdk_repo
00:07:08.680 + DIR_ROOT=/home/vagrant/spdk_repo
00:07:08.680 + [[ -n /home/vagrant/spdk_repo ]]
00:07:08.680 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:07:08.680 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:07:08.680 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:07:08.680 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:07:08.680 + [[ -d /home/vagrant/spdk_repo/output ]]
00:07:08.680 + [[ raid-vg-autotest == pkgdep-* ]]
00:07:08.680 + cd /home/vagrant/spdk_repo
00:07:08.680 + source /etc/os-release
00:07:08.680 ++ NAME='Fedora Linux'
00:07:08.680 ++ VERSION='39 (Cloud Edition)'
00:07:08.680 ++ ID=fedora
00:07:08.680 ++ VERSION_ID=39
00:07:08.680 ++ VERSION_CODENAME=
00:07:08.680 ++ PLATFORM_ID=platform:f39
00:07:08.680 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:07:08.680 ++ ANSI_COLOR='0;38;2;60;110;180'
00:07:08.680 ++ LOGO=fedora-logo-icon
00:07:08.680 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:07:08.680 ++ HOME_URL=https://fedoraproject.org/
00:07:08.681 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:07:08.681 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:07:08.681 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:07:08.681 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:07:08.681 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:07:08.681 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:07:08.681 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:07:08.681 ++ SUPPORT_END=2024-11-12
00:07:08.681 ++ VARIANT='Cloud Edition'
00:07:08.681 ++ VARIANT_ID=cloud
00:07:08.681 + uname -a
00:07:08.681 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:07:08.681 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:07:08.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:08.980 Hugepages
00:07:08.980 node hugesize free / total
00:07:08.980 node0 1048576kB 0 / 0
00:07:08.980 node0 2048kB 0 / 0
00:07:08.980
00:07:08.980 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:09.238 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:07:09.238 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:07:09.238 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:07:09.238 + rm -f /tmp/spdk-ld-path
00:07:09.238 + source autorun-spdk.conf
00:07:09.238 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:09.238 ++ SPDK_RUN_ASAN=1
00:07:09.238 ++ SPDK_RUN_UBSAN=1
00:07:09.238 ++ SPDK_TEST_RAID=1
00:07:09.238 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:09.238 ++ RUN_NIGHTLY=0
00:07:09.238 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:07:09.238 + [[ -n '' ]]
00:07:09.238 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:07:09.238 + for M in /var/spdk/build-*-manifest.txt
00:07:09.238 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:07:09.238 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:07:09.238 + for M in /var/spdk/build-*-manifest.txt
00:07:09.238 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:07:09.238 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:07:09.238 + for M in /var/spdk/build-*-manifest.txt
00:07:09.238 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:07:09.238 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:07:09.238 ++ uname
00:07:09.238 + [[ Linux == \L\i\n\u\x ]]
00:07:09.238 + sudo dmesg -T
00:07:09.238 + sudo dmesg --clear
00:07:09.238 + dmesg_pid=5210
00:07:09.238 + sudo dmesg -Tw
00:07:09.238 + [[ Fedora Linux == FreeBSD ]]
00:07:09.238 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:09.238 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:09.238 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:09.238 + [[ -x /usr/src/fio-static/fio ]]
00:07:09.238 + export FIO_BIN=/usr/src/fio-static/fio
00:07:09.238 + FIO_BIN=/usr/src/fio-static/fio
00:07:09.238 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:07:09.238 + [[ ! -v VFIO_QEMU_BIN ]]
00:07:09.238 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:07:09.238 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:09.238 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:09.238 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:07:09.238 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:09.238 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:09.238 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:07:09.238 13:01:15 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:07:09.239 13:01:15 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:07:09.239 13:01:15 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:09.239 13:01:15 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:07:09.239 13:01:15 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:07:09.239 13:01:15 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:07:09.239 13:01:15 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:09.239 13:01:15 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:07:09.239 13:01:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:07:09.239 13:01:15 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:07:09.497 13:01:15 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:07:09.497 13:01:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:09.497 13:01:15 -- scripts/common.sh@15 -- $ shopt -s extglob
00:07:09.497 13:01:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:07:09.497 13:01:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:09.497 13:01:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:09.497 13:01:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.497 13:01:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.497 13:01:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.497 13:01:15 -- paths/export.sh@5 -- $ export PATH
00:07:09.497 13:01:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:09.497 13:01:15 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:07:09.497 13:01:15 -- common/autobuild_common.sh@493 -- $ date +%s
00:07:09.497 13:01:15 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733490075.XXXXXX
00:07:09.497 13:01:15 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733490075.nV9Mqn
00:07:09.497 13:01:15 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:07:09.497 13:01:15 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:07:09.497 13:01:15 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:07:09.497 13:01:15 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:07:09.497 13:01:15 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:07:09.497 13:01:15 -- common/autobuild_common.sh@509 -- $ get_config_params
00:07:09.497 13:01:15 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:07:09.497 13:01:15 -- common/autotest_common.sh@10 -- $ set +x
00:07:09.497 13:01:15 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:07:09.497 13:01:15 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:07:09.497 13:01:15 -- pm/common@17 -- $ local monitor
00:07:09.497 13:01:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:09.497 13:01:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:09.497 13:01:15 -- pm/common@25 -- $ sleep 1
00:07:09.497 13:01:15 -- pm/common@21 -- $ date +%s
00:07:09.497 13:01:15 -- pm/common@21 -- $ date +%s
00:07:09.497 13:01:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490075
00:07:09.497 13:01:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490075
00:07:09.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490075_collect-vmstat.pm.log
00:07:09.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490075_collect-cpu-load.pm.log
00:07:10.429 13:01:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:07:10.429 13:01:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:07:10.429 13:01:16 -- spdk/autobuild.sh@12 -- $ umask 022
00:07:10.429 13:01:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:07:10.429 13:01:16 -- spdk/autobuild.sh@16 -- $ date -u
00:07:10.429 Fri Dec 6 01:01:16 PM UTC 2024
00:07:10.429 13:01:16 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:07:10.429 v25.01-pre-308-gcf089b398
00:07:10.429 13:01:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:07:10.429 13:01:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:07:10.429 13:01:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:10.429 13:01:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:10.429 13:01:16 -- common/autotest_common.sh@10 -- $ set +x
00:07:10.429 ************************************
00:07:10.429 START TEST asan
00:07:10.429 ************************************
00:07:10.429 using asan
00:07:10.429 13:01:16 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:07:10.429
00:07:10.429 real 0m0.000s
00:07:10.429 user 0m0.000s
00:07:10.429 sys 0m0.000s
00:07:10.429 13:01:16 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:07:10.429 13:01:16 asan -- common/autotest_common.sh@10 -- $ set +x
00:07:10.429 ************************************
00:07:10.429 END TEST asan
00:07:10.429 ************************************
00:07:10.429 13:01:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:07:10.429 13:01:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:07:10.429 13:01:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:10.429 13:01:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:10.429 13:01:16 -- common/autotest_common.sh@10 -- $ set +x
00:07:10.429 ************************************
00:07:10.429 START TEST ubsan
00:07:10.429 ************************************
00:07:10.429 using ubsan
00:07:10.429 13:01:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:07:10.429
00:07:10.429 real 0m0.000s
00:07:10.429 user 0m0.000s
00:07:10.429 sys 0m0.000s
00:07:10.430 13:01:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:07:10.430 13:01:16 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:07:10.430 ************************************
00:07:10.430 END TEST ubsan
00:07:10.430 ************************************
00:07:10.687 13:01:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:07:10.687 13:01:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:07:10.687 13:01:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:07:10.687 13:01:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:07:10.687 13:01:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:07:10.687 13:01:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:07:10.687 13:01:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:07:10.687 13:01:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:07:10.687 13:01:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:07:10.687 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:07:10.687 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:10.945 Using 'verbs' RDMA provider
00:07:24.507 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:07:39.411 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:07:39.411 Creating mk/config.mk...done.
00:07:39.411 Creating mk/cc.flags.mk...done.
00:07:39.411 Type 'make' to build.
00:07:39.411 13:01:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:07:39.411 13:01:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:39.411 13:01:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:39.411 13:01:44 -- common/autotest_common.sh@10 -- $ set +x
00:07:39.411 ************************************
00:07:39.411 START TEST make
00:07:39.411 ************************************
00:07:39.411 13:01:44 make -- common/autotest_common.sh@1129 -- $ make -j10
00:07:39.411 make[1]: Nothing to be done for 'all'.
00:07:54.316 The Meson build system 00:07:54.316 Version: 1.5.0 00:07:54.316 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:54.316 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:54.316 Build type: native build 00:07:54.316 Program cat found: YES (/usr/bin/cat) 00:07:54.316 Project name: DPDK 00:07:54.316 Project version: 24.03.0 00:07:54.316 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:54.316 C linker for the host machine: cc ld.bfd 2.40-14 00:07:54.316 Host machine cpu family: x86_64 00:07:54.316 Host machine cpu: x86_64 00:07:54.316 Message: ## Building in Developer Mode ## 00:07:54.316 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:54.316 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:54.316 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:54.316 Program python3 found: YES (/usr/bin/python3) 00:07:54.316 Program cat found: YES (/usr/bin/cat) 00:07:54.316 Compiler for C supports arguments -march=native: YES 00:07:54.316 Checking for size of "void *" : 8 00:07:54.316 Checking for size of "void *" : 8 (cached) 00:07:54.316 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:54.316 Library m found: YES 00:07:54.316 Library numa found: YES 00:07:54.316 Has header "numaif.h" : YES 00:07:54.316 Library fdt found: NO 00:07:54.316 Library execinfo found: NO 00:07:54.316 Has header "execinfo.h" : YES 00:07:54.316 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:54.316 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:54.316 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:54.316 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:54.316 Run-time dependency openssl found: YES 3.1.1 00:07:54.316 Run-time dependency libpcap found: YES 1.10.4 00:07:54.316 Has header "pcap.h" with dependency 
libpcap: YES 00:07:54.316 Compiler for C supports arguments -Wcast-qual: YES 00:07:54.316 Compiler for C supports arguments -Wdeprecated: YES 00:07:54.316 Compiler for C supports arguments -Wformat: YES 00:07:54.316 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:54.316 Compiler for C supports arguments -Wformat-security: NO 00:07:54.316 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:54.316 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:54.316 Compiler for C supports arguments -Wnested-externs: YES 00:07:54.316 Compiler for C supports arguments -Wold-style-definition: YES 00:07:54.316 Compiler for C supports arguments -Wpointer-arith: YES 00:07:54.316 Compiler for C supports arguments -Wsign-compare: YES 00:07:54.316 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:54.316 Compiler for C supports arguments -Wundef: YES 00:07:54.316 Compiler for C supports arguments -Wwrite-strings: YES 00:07:54.316 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:54.316 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:54.316 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:54.316 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:54.316 Program objdump found: YES (/usr/bin/objdump) 00:07:54.316 Compiler for C supports arguments -mavx512f: YES 00:07:54.316 Checking if "AVX512 checking" compiles: YES 00:07:54.316 Fetching value of define "__SSE4_2__" : 1 00:07:54.316 Fetching value of define "__AES__" : 1 00:07:54.316 Fetching value of define "__AVX__" : 1 00:07:54.316 Fetching value of define "__AVX2__" : 1 00:07:54.316 Fetching value of define "__AVX512BW__" : (undefined) 00:07:54.316 Fetching value of define "__AVX512CD__" : (undefined) 00:07:54.316 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:54.316 Fetching value of define "__AVX512F__" : (undefined) 00:07:54.316 Fetching value of define "__AVX512VL__" : 
(undefined) 00:07:54.316 Fetching value of define "__PCLMUL__" : 1 00:07:54.316 Fetching value of define "__RDRND__" : 1 00:07:54.316 Fetching value of define "__RDSEED__" : 1 00:07:54.316 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:54.316 Fetching value of define "__znver1__" : (undefined) 00:07:54.316 Fetching value of define "__znver2__" : (undefined) 00:07:54.316 Fetching value of define "__znver3__" : (undefined) 00:07:54.316 Fetching value of define "__znver4__" : (undefined) 00:07:54.317 Library asan found: YES 00:07:54.317 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:54.317 Message: lib/log: Defining dependency "log" 00:07:54.317 Message: lib/kvargs: Defining dependency "kvargs" 00:07:54.317 Message: lib/telemetry: Defining dependency "telemetry" 00:07:54.317 Library rt found: YES 00:07:54.317 Checking for function "getentropy" : NO 00:07:54.317 Message: lib/eal: Defining dependency "eal" 00:07:54.317 Message: lib/ring: Defining dependency "ring" 00:07:54.317 Message: lib/rcu: Defining dependency "rcu" 00:07:54.317 Message: lib/mempool: Defining dependency "mempool" 00:07:54.317 Message: lib/mbuf: Defining dependency "mbuf" 00:07:54.317 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:54.317 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:54.317 Compiler for C supports arguments -mpclmul: YES 00:07:54.317 Compiler for C supports arguments -maes: YES 00:07:54.317 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:54.317 Compiler for C supports arguments -mavx512bw: YES 00:07:54.317 Compiler for C supports arguments -mavx512dq: YES 00:07:54.317 Compiler for C supports arguments -mavx512vl: YES 00:07:54.317 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:54.317 Compiler for C supports arguments -mavx2: YES 00:07:54.317 Compiler for C supports arguments -mavx: YES 00:07:54.317 Message: lib/net: Defining dependency "net" 00:07:54.317 Message: lib/meter: Defining 
dependency "meter" 00:07:54.317 Message: lib/ethdev: Defining dependency "ethdev" 00:07:54.317 Message: lib/pci: Defining dependency "pci" 00:07:54.317 Message: lib/cmdline: Defining dependency "cmdline" 00:07:54.317 Message: lib/hash: Defining dependency "hash" 00:07:54.317 Message: lib/timer: Defining dependency "timer" 00:07:54.317 Message: lib/compressdev: Defining dependency "compressdev" 00:07:54.317 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:54.317 Message: lib/dmadev: Defining dependency "dmadev" 00:07:54.317 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:54.317 Message: lib/power: Defining dependency "power" 00:07:54.317 Message: lib/reorder: Defining dependency "reorder" 00:07:54.317 Message: lib/security: Defining dependency "security" 00:07:54.317 Has header "linux/userfaultfd.h" : YES 00:07:54.317 Has header "linux/vduse.h" : YES 00:07:54.317 Message: lib/vhost: Defining dependency "vhost" 00:07:54.317 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:54.317 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:54.317 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:54.317 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:54.317 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:54.317 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:54.317 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:54.317 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:54.317 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:54.317 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:54.317 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:54.317 Configuring doxy-api-html.conf using configuration 00:07:54.317 Configuring doxy-api-man.conf using configuration 00:07:54.317 Program mandb found: YES 
(/usr/bin/mandb) 00:07:54.317 Program sphinx-build found: NO 00:07:54.317 Configuring rte_build_config.h using configuration 00:07:54.317 Message: 00:07:54.317 ================= 00:07:54.317 Applications Enabled 00:07:54.317 ================= 00:07:54.317 00:07:54.317 apps: 00:07:54.317 00:07:54.317 00:07:54.317 Message: 00:07:54.317 ================= 00:07:54.317 Libraries Enabled 00:07:54.317 ================= 00:07:54.317 00:07:54.317 libs: 00:07:54.317 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:54.317 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:54.317 cryptodev, dmadev, power, reorder, security, vhost, 00:07:54.317 00:07:54.317 Message: 00:07:54.317 =============== 00:07:54.317 Drivers Enabled 00:07:54.317 =============== 00:07:54.317 00:07:54.317 common: 00:07:54.317 00:07:54.317 bus: 00:07:54.317 pci, vdev, 00:07:54.317 mempool: 00:07:54.317 ring, 00:07:54.317 dma: 00:07:54.317 00:07:54.317 net: 00:07:54.317 00:07:54.317 crypto: 00:07:54.317 00:07:54.317 compress: 00:07:54.317 00:07:54.317 vdpa: 00:07:54.317 00:07:54.317 00:07:54.317 Message: 00:07:54.317 ================= 00:07:54.317 Content Skipped 00:07:54.317 ================= 00:07:54.317 00:07:54.317 apps: 00:07:54.317 dumpcap: explicitly disabled via build config 00:07:54.317 graph: explicitly disabled via build config 00:07:54.317 pdump: explicitly disabled via build config 00:07:54.317 proc-info: explicitly disabled via build config 00:07:54.317 test-acl: explicitly disabled via build config 00:07:54.317 test-bbdev: explicitly disabled via build config 00:07:54.317 test-cmdline: explicitly disabled via build config 00:07:54.317 test-compress-perf: explicitly disabled via build config 00:07:54.317 test-crypto-perf: explicitly disabled via build config 00:07:54.317 test-dma-perf: explicitly disabled via build config 00:07:54.317 test-eventdev: explicitly disabled via build config 00:07:54.317 test-fib: explicitly disabled via build config 00:07:54.317 
test-flow-perf: explicitly disabled via build config 00:07:54.317 test-gpudev: explicitly disabled via build config 00:07:54.317 test-mldev: explicitly disabled via build config 00:07:54.317 test-pipeline: explicitly disabled via build config 00:07:54.317 test-pmd: explicitly disabled via build config 00:07:54.317 test-regex: explicitly disabled via build config 00:07:54.317 test-sad: explicitly disabled via build config 00:07:54.317 test-security-perf: explicitly disabled via build config 00:07:54.317 00:07:54.317 libs: 00:07:54.317 argparse: explicitly disabled via build config 00:07:54.317 metrics: explicitly disabled via build config 00:07:54.317 acl: explicitly disabled via build config 00:07:54.317 bbdev: explicitly disabled via build config 00:07:54.317 bitratestats: explicitly disabled via build config 00:07:54.317 bpf: explicitly disabled via build config 00:07:54.317 cfgfile: explicitly disabled via build config 00:07:54.317 distributor: explicitly disabled via build config 00:07:54.317 efd: explicitly disabled via build config 00:07:54.317 eventdev: explicitly disabled via build config 00:07:54.317 dispatcher: explicitly disabled via build config 00:07:54.317 gpudev: explicitly disabled via build config 00:07:54.317 gro: explicitly disabled via build config 00:07:54.317 gso: explicitly disabled via build config 00:07:54.317 ip_frag: explicitly disabled via build config 00:07:54.318 jobstats: explicitly disabled via build config 00:07:54.318 latencystats: explicitly disabled via build config 00:07:54.318 lpm: explicitly disabled via build config 00:07:54.318 member: explicitly disabled via build config 00:07:54.318 pcapng: explicitly disabled via build config 00:07:54.318 rawdev: explicitly disabled via build config 00:07:54.318 regexdev: explicitly disabled via build config 00:07:54.318 mldev: explicitly disabled via build config 00:07:54.318 rib: explicitly disabled via build config 00:07:54.318 sched: explicitly disabled via build config 00:07:54.318 
stack: explicitly disabled via build config 00:07:54.318 ipsec: explicitly disabled via build config 00:07:54.318 pdcp: explicitly disabled via build config 00:07:54.318 fib: explicitly disabled via build config 00:07:54.318 port: explicitly disabled via build config 00:07:54.318 pdump: explicitly disabled via build config 00:07:54.318 table: explicitly disabled via build config 00:07:54.318 pipeline: explicitly disabled via build config 00:07:54.318 graph: explicitly disabled via build config 00:07:54.318 node: explicitly disabled via build config 00:07:54.318 00:07:54.318 drivers: 00:07:54.318 common/cpt: not in enabled drivers build config 00:07:54.318 common/dpaax: not in enabled drivers build config 00:07:54.318 common/iavf: not in enabled drivers build config 00:07:54.318 common/idpf: not in enabled drivers build config 00:07:54.318 common/ionic: not in enabled drivers build config 00:07:54.318 common/mvep: not in enabled drivers build config 00:07:54.318 common/octeontx: not in enabled drivers build config 00:07:54.318 bus/auxiliary: not in enabled drivers build config 00:07:54.318 bus/cdx: not in enabled drivers build config 00:07:54.318 bus/dpaa: not in enabled drivers build config 00:07:54.318 bus/fslmc: not in enabled drivers build config 00:07:54.318 bus/ifpga: not in enabled drivers build config 00:07:54.318 bus/platform: not in enabled drivers build config 00:07:54.318 bus/uacce: not in enabled drivers build config 00:07:54.318 bus/vmbus: not in enabled drivers build config 00:07:54.318 common/cnxk: not in enabled drivers build config 00:07:54.318 common/mlx5: not in enabled drivers build config 00:07:54.318 common/nfp: not in enabled drivers build config 00:07:54.318 common/nitrox: not in enabled drivers build config 00:07:54.318 common/qat: not in enabled drivers build config 00:07:54.318 common/sfc_efx: not in enabled drivers build config 00:07:54.318 mempool/bucket: not in enabled drivers build config 00:07:54.318 mempool/cnxk: not in enabled 
drivers build config 00:07:54.318 mempool/dpaa: not in enabled drivers build config 00:07:54.318 mempool/dpaa2: not in enabled drivers build config 00:07:54.318 mempool/octeontx: not in enabled drivers build config 00:07:54.318 mempool/stack: not in enabled drivers build config 00:07:54.318 dma/cnxk: not in enabled drivers build config 00:07:54.318 dma/dpaa: not in enabled drivers build config 00:07:54.318 dma/dpaa2: not in enabled drivers build config 00:07:54.318 dma/hisilicon: not in enabled drivers build config 00:07:54.318 dma/idxd: not in enabled drivers build config 00:07:54.318 dma/ioat: not in enabled drivers build config 00:07:54.318 dma/skeleton: not in enabled drivers build config 00:07:54.318 net/af_packet: not in enabled drivers build config 00:07:54.318 net/af_xdp: not in enabled drivers build config 00:07:54.318 net/ark: not in enabled drivers build config 00:07:54.318 net/atlantic: not in enabled drivers build config 00:07:54.318 net/avp: not in enabled drivers build config 00:07:54.318 net/axgbe: not in enabled drivers build config 00:07:54.318 net/bnx2x: not in enabled drivers build config 00:07:54.318 net/bnxt: not in enabled drivers build config 00:07:54.318 net/bonding: not in enabled drivers build config 00:07:54.318 net/cnxk: not in enabled drivers build config 00:07:54.318 net/cpfl: not in enabled drivers build config 00:07:54.318 net/cxgbe: not in enabled drivers build config 00:07:54.318 net/dpaa: not in enabled drivers build config 00:07:54.318 net/dpaa2: not in enabled drivers build config 00:07:54.318 net/e1000: not in enabled drivers build config 00:07:54.318 net/ena: not in enabled drivers build config 00:07:54.318 net/enetc: not in enabled drivers build config 00:07:54.318 net/enetfec: not in enabled drivers build config 00:07:54.318 net/enic: not in enabled drivers build config 00:07:54.318 net/failsafe: not in enabled drivers build config 00:07:54.318 net/fm10k: not in enabled drivers build config 00:07:54.318 net/gve: not in 
enabled drivers build config 00:07:54.318 net/hinic: not in enabled drivers build config 00:07:54.318 net/hns3: not in enabled drivers build config 00:07:54.318 net/i40e: not in enabled drivers build config 00:07:54.318 net/iavf: not in enabled drivers build config 00:07:54.318 net/ice: not in enabled drivers build config 00:07:54.318 net/idpf: not in enabled drivers build config 00:07:54.318 net/igc: not in enabled drivers build config 00:07:54.318 net/ionic: not in enabled drivers build config 00:07:54.318 net/ipn3ke: not in enabled drivers build config 00:07:54.318 net/ixgbe: not in enabled drivers build config 00:07:54.318 net/mana: not in enabled drivers build config 00:07:54.318 net/memif: not in enabled drivers build config 00:07:54.318 net/mlx4: not in enabled drivers build config 00:07:54.318 net/mlx5: not in enabled drivers build config 00:07:54.318 net/mvneta: not in enabled drivers build config 00:07:54.318 net/mvpp2: not in enabled drivers build config 00:07:54.318 net/netvsc: not in enabled drivers build config 00:07:54.318 net/nfb: not in enabled drivers build config 00:07:54.318 net/nfp: not in enabled drivers build config 00:07:54.318 net/ngbe: not in enabled drivers build config 00:07:54.318 net/null: not in enabled drivers build config 00:07:54.318 net/octeontx: not in enabled drivers build config 00:07:54.318 net/octeon_ep: not in enabled drivers build config 00:07:54.318 net/pcap: not in enabled drivers build config 00:07:54.318 net/pfe: not in enabled drivers build config 00:07:54.318 net/qede: not in enabled drivers build config 00:07:54.318 net/ring: not in enabled drivers build config 00:07:54.318 net/sfc: not in enabled drivers build config 00:07:54.318 net/softnic: not in enabled drivers build config 00:07:54.318 net/tap: not in enabled drivers build config 00:07:54.318 net/thunderx: not in enabled drivers build config 00:07:54.318 net/txgbe: not in enabled drivers build config 00:07:54.318 net/vdev_netvsc: not in enabled drivers build 
config 00:07:54.318 net/vhost: not in enabled drivers build config 00:07:54.318 net/virtio: not in enabled drivers build config 00:07:54.318 net/vmxnet3: not in enabled drivers build config 00:07:54.318 raw/*: missing internal dependency, "rawdev" 00:07:54.318 crypto/armv8: not in enabled drivers build config 00:07:54.318 crypto/bcmfs: not in enabled drivers build config 00:07:54.318 crypto/caam_jr: not in enabled drivers build config 00:07:54.318 crypto/ccp: not in enabled drivers build config 00:07:54.318 crypto/cnxk: not in enabled drivers build config 00:07:54.318 crypto/dpaa_sec: not in enabled drivers build config 00:07:54.318 crypto/dpaa2_sec: not in enabled drivers build config 00:07:54.318 crypto/ipsec_mb: not in enabled drivers build config 00:07:54.318 crypto/mlx5: not in enabled drivers build config 00:07:54.318 crypto/mvsam: not in enabled drivers build config 00:07:54.319 crypto/nitrox: not in enabled drivers build config 00:07:54.319 crypto/null: not in enabled drivers build config 00:07:54.319 crypto/octeontx: not in enabled drivers build config 00:07:54.319 crypto/openssl: not in enabled drivers build config 00:07:54.319 crypto/scheduler: not in enabled drivers build config 00:07:54.319 crypto/uadk: not in enabled drivers build config 00:07:54.319 crypto/virtio: not in enabled drivers build config 00:07:54.319 compress/isal: not in enabled drivers build config 00:07:54.319 compress/mlx5: not in enabled drivers build config 00:07:54.319 compress/nitrox: not in enabled drivers build config 00:07:54.319 compress/octeontx: not in enabled drivers build config 00:07:54.319 compress/zlib: not in enabled drivers build config 00:07:54.319 regex/*: missing internal dependency, "regexdev" 00:07:54.319 ml/*: missing internal dependency, "mldev" 00:07:54.319 vdpa/ifc: not in enabled drivers build config 00:07:54.319 vdpa/mlx5: not in enabled drivers build config 00:07:54.319 vdpa/nfp: not in enabled drivers build config 00:07:54.319 vdpa/sfc: not in enabled 
drivers build config 00:07:54.319 event/*: missing internal dependency, "eventdev" 00:07:54.319 baseband/*: missing internal dependency, "bbdev" 00:07:54.319 gpu/*: missing internal dependency, "gpudev" 00:07:54.319 00:07:54.319 00:07:54.319 Build targets in project: 85 00:07:54.319 00:07:54.319 DPDK 24.03.0 00:07:54.319 00:07:54.319 User defined options 00:07:54.319 buildtype : debug 00:07:54.319 default_library : shared 00:07:54.319 libdir : lib 00:07:54.319 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:54.319 b_sanitize : address 00:07:54.319 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:54.319 c_link_args : 00:07:54.319 cpu_instruction_set: native 00:07:54.319 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:54.319 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:54.319 enable_docs : false 00:07:54.319 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:54.319 enable_kmods : false 00:07:54.319 max_lcores : 128 00:07:54.319 tests : false 00:07:54.319 00:07:54.319 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:54.319 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:54.319 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:54.319 [2/268] Linking static target lib/librte_kvargs.a 00:07:54.319 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:54.319 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:07:54.319 [5/268] Linking static target lib/librte_log.a 00:07:54.319 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:54.319 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:54.319 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:54.319 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.319 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:54.319 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:54.319 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:54.319 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:54.319 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:54.319 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:54.319 [16/268] Linking static target lib/librte_telemetry.a 00:07:54.319 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.578 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:54.578 [19/268] Linking target lib/librte_log.so.24.1 00:07:54.578 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:54.836 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:54.836 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:55.095 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:55.095 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:55.095 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:55.095 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 
00:07:55.375 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:55.375 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:55.375 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:55.375 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:55.634 [31/268] Linking target lib/librte_telemetry.so.24.1 00:07:55.634 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:55.893 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:55.893 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:55.893 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:56.256 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:56.256 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:56.256 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:56.514 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:56.515 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:56.773 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:56.773 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:56.773 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:56.773 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:56.773 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:57.031 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:57.289 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:57.289 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:57.289 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:57.548 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:57.548 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:57.548 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:57.805 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:58.062 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:58.062 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:58.318 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:58.318 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:58.318 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:58.576 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:58.576 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:58.576 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:58.576 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:58.834 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:58.834 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:58.834 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:58.834 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:59.397 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:59.397 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:59.397 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:59.654 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:59.654 
[71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:59.654 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:59.654 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:59.911 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:59.911 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:59.911 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:59.911 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:59.911 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:59.911 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:00.169 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:00.169 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:00.426 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:00.426 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:00.426 [84/268] Linking static target lib/librte_ring.a 00:08:00.426 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:00.684 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:00.684 [87/268] Linking static target lib/librte_eal.a 00:08:00.684 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:00.943 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:01.211 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:01.211 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:01.211 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:01.468 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:01.468 [94/268] Linking static 
target lib/librte_mempool.a 00:08:01.468 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:01.468 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:01.468 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:01.468 [98/268] Linking static target lib/librte_rcu.a 00:08:01.725 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:01.725 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:01.983 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:02.549 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:02.549 [103/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:02.549 [104/268] Linking static target lib/librte_meter.a 00:08:02.549 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:02.549 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:02.807 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:02.807 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:02.807 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:02.807 [110/268] Linking static target lib/librte_net.a 00:08:02.807 [111/268] Linking static target lib/librte_mbuf.a 00:08:03.065 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:03.065 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:03.324 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:03.582 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:03.582 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:03.582 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:04.147 [118/268] 
Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:04.147 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:04.715 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:04.715 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:04.715 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:04.715 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:05.284 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:05.284 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:05.284 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:05.543 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:05.543 [128/268] Linking static target lib/librte_pci.a 00:08:05.543 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:05.543 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:05.543 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:05.543 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:05.543 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:05.802 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:05.802 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:05.802 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:05.802 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:05.802 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:06.062 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:06.062 [140/268] Generating lib/pci.sym_chk 
with a custom command (wrapped by meson to capture output) 00:08:06.062 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:06.062 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:06.062 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:06.062 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:06.326 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:06.586 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:06.586 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:06.586 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:06.586 [149/268] Linking static target lib/librte_cmdline.a 00:08:06.846 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:06.846 [151/268] Linking static target lib/librte_ethdev.a 00:08:07.105 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:07.105 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:07.105 [154/268] Linking static target lib/librte_timer.a 00:08:07.364 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:07.622 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:07.881 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:07.881 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.483 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:08.483 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:08.483 [161/268] Linking static target lib/librte_compressdev.a 00:08:08.483 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 
00:08:08.741 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:08.998 [164/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:08.998 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:09.257 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:09.257 [167/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:09.257 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:09.257 [169/268] Linking static target lib/librte_dmadev.a 00:08:09.515 [170/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:09.515 [171/268] Linking static target lib/librte_hash.a 00:08:09.773 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:09.773 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:09.773 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:10.339 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:10.339 [176/268] Linking static target lib/librte_cryptodev.a 00:08:10.339 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:10.598 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:10.598 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:10.598 [180/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:10.856 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:10.856 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:10.856 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:11.113 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 
00:08:11.113 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:11.113 [186/268] Linking static target lib/librte_power.a 00:08:11.712 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:11.712 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:11.970 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:11.970 [190/268] Linking static target lib/librte_reorder.a 00:08:11.970 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:12.535 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:12.535 [193/268] Linking static target lib/librte_security.a 00:08:12.794 [194/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:13.052 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:13.313 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:13.572 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:13.572 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:13.572 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:13.572 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:13.830 [201/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:14.395 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:14.395 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:14.395 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:14.657 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:14.657 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:14.928 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:14.928 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:14.928 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:15.187 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:15.187 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:15.187 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:15.187 [213/268] Linking static target drivers/librte_bus_pci.a 00:08:15.445 [214/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:15.445 [215/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:15.704 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:15.704 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:15.704 [218/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:15.704 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:15.704 [220/268] Linking static target drivers/librte_bus_vdev.a 00:08:15.704 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:15.963 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:15.963 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:15.963 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:15.963 [225/268] Linking static target drivers/librte_mempool_ring.a 00:08:16.221 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:16.221 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:08:16.479 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:16.738 [229/268] Linking target lib/librte_eal.so.24.1 00:08:16.738 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:16.997 [231/268] Linking target lib/librte_meter.so.24.1 00:08:16.997 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:16.997 [233/268] Linking target lib/librte_timer.so.24.1 00:08:16.997 [234/268] Linking target lib/librte_dmadev.so.24.1 00:08:16.997 [235/268] Linking target lib/librte_ring.so.24.1 00:08:16.997 [236/268] Linking target lib/librte_pci.so.24.1 00:08:17.256 [237/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:17.257 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:17.257 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:17.258 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:17.258 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:17.258 [242/268] Linking target lib/librte_rcu.so.24.1 00:08:17.258 [243/268] Linking target lib/librte_mempool.so.24.1 00:08:17.258 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:17.518 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:17.518 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:17.518 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:17.518 [248/268] Linking target lib/librte_mbuf.so.24.1 00:08:17.776 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:18.045 [250/268] Linking target lib/librte_compressdev.so.24.1 00:08:18.045 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:08:18.045 [252/268] Linking target 
lib/librte_reorder.so.24.1 00:08:18.045 [253/268] Linking target lib/librte_net.so.24.1 00:08:18.045 [254/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:18.045 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:18.045 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:18.303 [257/268] Linking target lib/librte_security.so.24.1 00:08:18.303 [258/268] Linking target lib/librte_cmdline.so.24.1 00:08:18.303 [259/268] Linking target lib/librte_hash.so.24.1 00:08:18.303 [260/268] Linking target lib/librte_ethdev.so.24.1 00:08:18.303 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:18.303 [262/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:18.562 [263/268] Linking target lib/librte_power.so.24.1 00:08:18.562 [264/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:25.139 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:25.139 [266/268] Linking static target lib/librte_vhost.a 00:08:26.095 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.352 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:26.352 INFO: autodetecting backend as ninja 00:08:26.352 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:52.911 CC lib/ut_mock/mock.o 00:08:52.911 CC lib/log/log.o 00:08:52.911 CC lib/log/log_deprecated.o 00:08:52.911 CC lib/log/log_flags.o 00:08:52.911 CC lib/ut/ut.o 00:08:52.911 LIB libspdk_ut.a 00:08:52.911 SO libspdk_ut.so.2.0 00:08:52.911 LIB libspdk_log.a 00:08:52.911 LIB libspdk_ut_mock.a 00:08:52.911 SO libspdk_ut_mock.so.6.0 00:08:52.911 SYMLINK libspdk_ut.so 00:08:52.911 SO libspdk_log.so.7.1 00:08:52.911 SYMLINK libspdk_ut_mock.so 00:08:52.911 SYMLINK 
libspdk_log.so 00:08:52.911 CC lib/dma/dma.o 00:08:52.911 CC lib/ioat/ioat.o 00:08:52.911 CC lib/util/base64.o 00:08:52.911 CC lib/util/bit_array.o 00:08:52.911 CC lib/util/cpuset.o 00:08:52.911 CXX lib/trace_parser/trace.o 00:08:52.911 CC lib/util/crc32.o 00:08:52.911 CC lib/util/crc16.o 00:08:52.911 CC lib/util/crc32c.o 00:08:52.911 CC lib/vfio_user/host/vfio_user_pci.o 00:08:52.911 CC lib/vfio_user/host/vfio_user.o 00:08:52.911 CC lib/util/crc32_ieee.o 00:08:52.911 CC lib/util/crc64.o 00:08:52.911 CC lib/util/dif.o 00:08:52.911 LIB libspdk_dma.a 00:08:52.911 CC lib/util/fd.o 00:08:52.911 CC lib/util/fd_group.o 00:08:52.911 SO libspdk_dma.so.5.0 00:08:52.911 CC lib/util/file.o 00:08:52.911 LIB libspdk_ioat.a 00:08:52.911 SYMLINK libspdk_dma.so 00:08:52.911 CC lib/util/hexlify.o 00:08:52.911 SO libspdk_ioat.so.7.0 00:08:52.911 CC lib/util/iov.o 00:08:52.911 CC lib/util/math.o 00:08:52.911 CC lib/util/net.o 00:08:52.911 SYMLINK libspdk_ioat.so 00:08:52.911 LIB libspdk_vfio_user.a 00:08:52.911 CC lib/util/pipe.o 00:08:52.911 SO libspdk_vfio_user.so.5.0 00:08:52.911 CC lib/util/strerror_tls.o 00:08:52.911 CC lib/util/string.o 00:08:52.911 SYMLINK libspdk_vfio_user.so 00:08:52.911 CC lib/util/uuid.o 00:08:52.911 CC lib/util/xor.o 00:08:52.911 CC lib/util/zipf.o 00:08:52.911 CC lib/util/md5.o 00:08:52.911 LIB libspdk_util.a 00:08:52.911 SO libspdk_util.so.10.1 00:08:52.911 LIB libspdk_trace_parser.a 00:08:52.911 SYMLINK libspdk_util.so 00:08:52.911 SO libspdk_trace_parser.so.6.0 00:08:52.911 SYMLINK libspdk_trace_parser.so 00:08:52.911 CC lib/env_dpdk/env.o 00:08:52.911 CC lib/env_dpdk/pci.o 00:08:52.911 CC lib/env_dpdk/init.o 00:08:52.911 CC lib/env_dpdk/memory.o 00:08:52.911 CC lib/rdma_utils/rdma_utils.o 00:08:52.911 CC lib/idxd/idxd.o 00:08:52.911 CC lib/idxd/idxd_user.o 00:08:52.911 CC lib/vmd/vmd.o 00:08:52.911 CC lib/conf/conf.o 00:08:52.911 CC lib/json/json_parse.o 00:08:53.170 CC lib/json/json_util.o 00:08:53.170 LIB libspdk_rdma_utils.a 00:08:53.170 CC 
lib/json/json_write.o 00:08:53.170 LIB libspdk_conf.a 00:08:53.170 SO libspdk_rdma_utils.so.1.0 00:08:53.170 SO libspdk_conf.so.6.0 00:08:53.428 SYMLINK libspdk_rdma_utils.so 00:08:53.428 SYMLINK libspdk_conf.so 00:08:53.428 CC lib/env_dpdk/threads.o 00:08:53.428 CC lib/env_dpdk/pci_ioat.o 00:08:53.428 CC lib/env_dpdk/pci_virtio.o 00:08:53.428 CC lib/env_dpdk/pci_vmd.o 00:08:53.428 CC lib/env_dpdk/pci_idxd.o 00:08:53.428 CC lib/idxd/idxd_kernel.o 00:08:53.428 CC lib/env_dpdk/pci_event.o 00:08:53.428 CC lib/vmd/led.o 00:08:53.687 LIB libspdk_json.a 00:08:53.687 CC lib/env_dpdk/sigbus_handler.o 00:08:53.687 SO libspdk_json.so.6.0 00:08:53.687 CC lib/env_dpdk/pci_dpdk.o 00:08:53.687 SYMLINK libspdk_json.so 00:08:53.687 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:53.687 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:53.687 LIB libspdk_idxd.a 00:08:53.687 SO libspdk_idxd.so.12.1 00:08:53.687 LIB libspdk_vmd.a 00:08:53.946 SO libspdk_vmd.so.6.0 00:08:53.946 SYMLINK libspdk_idxd.so 00:08:53.946 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:53.946 CC lib/rdma_provider/common.o 00:08:53.946 SYMLINK libspdk_vmd.so 00:08:53.946 CC lib/jsonrpc/jsonrpc_client.o 00:08:53.946 CC lib/jsonrpc/jsonrpc_server.o 00:08:53.946 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:53.946 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:54.204 LIB libspdk_rdma_provider.a 00:08:54.205 SO libspdk_rdma_provider.so.7.0 00:08:54.205 SYMLINK libspdk_rdma_provider.so 00:08:54.205 LIB libspdk_jsonrpc.a 00:08:54.205 SO libspdk_jsonrpc.so.6.0 00:08:54.464 SYMLINK libspdk_jsonrpc.so 00:08:54.722 CC lib/rpc/rpc.o 00:08:54.722 LIB libspdk_env_dpdk.a 00:08:54.981 SO libspdk_env_dpdk.so.15.1 00:08:54.981 LIB libspdk_rpc.a 00:08:54.981 SO libspdk_rpc.so.6.0 00:08:54.982 SYMLINK libspdk_env_dpdk.so 00:08:55.258 SYMLINK libspdk_rpc.so 00:08:55.258 CC lib/trace/trace.o 00:08:55.258 CC lib/trace/trace_flags.o 00:08:55.258 CC lib/trace/trace_rpc.o 00:08:55.258 CC lib/keyring/keyring.o 00:08:55.258 CC lib/keyring/keyring_rpc.o 00:08:55.258 
CC lib/notify/notify_rpc.o 00:08:55.258 CC lib/notify/notify.o 00:08:55.516 LIB libspdk_notify.a 00:08:55.516 SO libspdk_notify.so.6.0 00:08:55.775 LIB libspdk_trace.a 00:08:55.775 SYMLINK libspdk_notify.so 00:08:55.775 LIB libspdk_keyring.a 00:08:55.775 SO libspdk_trace.so.11.0 00:08:55.775 SO libspdk_keyring.so.2.0 00:08:55.775 SYMLINK libspdk_trace.so 00:08:55.775 SYMLINK libspdk_keyring.so 00:08:56.034 CC lib/thread/iobuf.o 00:08:56.034 CC lib/thread/thread.o 00:08:56.034 CC lib/sock/sock.o 00:08:56.034 CC lib/sock/sock_rpc.o 00:08:56.618 LIB libspdk_sock.a 00:08:56.618 SO libspdk_sock.so.10.0 00:08:56.618 SYMLINK libspdk_sock.so 00:08:57.184 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:57.184 CC lib/nvme/nvme_ctrlr.o 00:08:57.184 CC lib/nvme/nvme_fabric.o 00:08:57.184 CC lib/nvme/nvme_ns.o 00:08:57.184 CC lib/nvme/nvme_pcie_common.o 00:08:57.184 CC lib/nvme/nvme_ns_cmd.o 00:08:57.184 CC lib/nvme/nvme_pcie.o 00:08:57.184 CC lib/nvme/nvme_qpair.o 00:08:57.184 CC lib/nvme/nvme.o 00:08:58.121 CC lib/nvme/nvme_quirks.o 00:08:58.121 CC lib/nvme/nvme_transport.o 00:08:58.121 CC lib/nvme/nvme_discovery.o 00:08:58.121 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:58.121 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:58.121 CC lib/nvme/nvme_tcp.o 00:08:58.380 CC lib/nvme/nvme_opal.o 00:08:58.380 LIB libspdk_thread.a 00:08:58.380 SO libspdk_thread.so.11.0 00:08:58.639 CC lib/nvme/nvme_io_msg.o 00:08:58.639 CC lib/nvme/nvme_poll_group.o 00:08:58.639 SYMLINK libspdk_thread.so 00:08:58.639 CC lib/nvme/nvme_zns.o 00:08:58.639 CC lib/nvme/nvme_stubs.o 00:08:58.639 CC lib/nvme/nvme_auth.o 00:08:58.897 CC lib/nvme/nvme_cuse.o 00:08:58.897 CC lib/nvme/nvme_rdma.o 00:08:59.155 CC lib/accel/accel.o 00:08:59.155 CC lib/accel/accel_rpc.o 00:08:59.414 CC lib/accel/accel_sw.o 00:08:59.414 CC lib/blob/blobstore.o 00:08:59.671 CC lib/init/json_config.o 00:08:59.671 CC lib/virtio/virtio.o 00:08:59.671 CC lib/virtio/virtio_vhost_user.o 00:08:59.929 CC lib/init/subsystem.o 00:08:59.929 CC 
lib/virtio/virtio_vfio_user.o 00:08:59.929 CC lib/blob/request.o 00:09:00.187 CC lib/blob/zeroes.o 00:09:00.187 CC lib/blob/blob_bs_dev.o 00:09:00.187 CC lib/init/subsystem_rpc.o 00:09:00.187 CC lib/virtio/virtio_pci.o 00:09:00.187 CC lib/init/rpc.o 00:09:00.481 CC lib/fsdev/fsdev.o 00:09:00.481 CC lib/fsdev/fsdev_io.o 00:09:00.481 CC lib/fsdev/fsdev_rpc.o 00:09:00.481 LIB libspdk_init.a 00:09:00.481 SO libspdk_init.so.6.0 00:09:00.481 SYMLINK libspdk_init.so 00:09:00.481 LIB libspdk_virtio.a 00:09:00.765 SO libspdk_virtio.so.7.0 00:09:00.765 LIB libspdk_accel.a 00:09:00.765 SYMLINK libspdk_virtio.so 00:09:00.765 SO libspdk_accel.so.16.0 00:09:00.765 LIB libspdk_nvme.a 00:09:00.765 CC lib/event/reactor.o 00:09:00.765 CC lib/event/app.o 00:09:00.765 CC lib/event/log_rpc.o 00:09:00.765 CC lib/event/scheduler_static.o 00:09:00.765 CC lib/event/app_rpc.o 00:09:00.765 SYMLINK libspdk_accel.so 00:09:01.023 SO libspdk_nvme.so.15.0 00:09:01.023 CC lib/bdev/bdev.o 00:09:01.023 CC lib/bdev/bdev_rpc.o 00:09:01.023 CC lib/bdev/part.o 00:09:01.023 CC lib/bdev/bdev_zone.o 00:09:01.023 CC lib/bdev/scsi_nvme.o 00:09:01.023 LIB libspdk_fsdev.a 00:09:01.282 SO libspdk_fsdev.so.2.0 00:09:01.282 SYMLINK libspdk_fsdev.so 00:09:01.282 SYMLINK libspdk_nvme.so 00:09:01.282 LIB libspdk_event.a 00:09:01.540 SO libspdk_event.so.14.0 00:09:01.540 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:09:01.540 SYMLINK libspdk_event.so 00:09:02.472 LIB libspdk_fuse_dispatcher.a 00:09:02.472 SO libspdk_fuse_dispatcher.so.1.0 00:09:02.472 SYMLINK libspdk_fuse_dispatcher.so 00:09:04.376 LIB libspdk_blob.a 00:09:04.376 SO libspdk_blob.so.12.0 00:09:04.376 SYMLINK libspdk_blob.so 00:09:04.634 CC lib/blobfs/blobfs.o 00:09:04.634 CC lib/blobfs/tree.o 00:09:04.634 CC lib/lvol/lvol.o 00:09:04.892 LIB libspdk_bdev.a 00:09:05.150 SO libspdk_bdev.so.17.0 00:09:05.150 SYMLINK libspdk_bdev.so 00:09:05.408 CC lib/nvmf/ctrlr.o 00:09:05.408 CC lib/nbd/nbd.o 00:09:05.408 CC lib/nvmf/ctrlr_discovery.o 00:09:05.408 CC 
lib/nbd/nbd_rpc.o 00:09:05.408 CC lib/nvmf/ctrlr_bdev.o 00:09:05.408 CC lib/ftl/ftl_core.o 00:09:05.408 CC lib/ublk/ublk.o 00:09:05.408 CC lib/scsi/dev.o 00:09:05.666 CC lib/ftl/ftl_init.o 00:09:05.666 LIB libspdk_blobfs.a 00:09:05.666 CC lib/scsi/lun.o 00:09:05.666 SO libspdk_blobfs.so.11.0 00:09:05.924 LIB libspdk_lvol.a 00:09:05.924 CC lib/ftl/ftl_layout.o 00:09:05.924 SYMLINK libspdk_blobfs.so 00:09:05.924 CC lib/ftl/ftl_debug.o 00:09:05.924 SO libspdk_lvol.so.11.0 00:09:05.924 CC lib/ftl/ftl_io.o 00:09:05.924 LIB libspdk_nbd.a 00:09:05.924 SO libspdk_nbd.so.7.0 00:09:05.924 SYMLINK libspdk_lvol.so 00:09:05.924 CC lib/ftl/ftl_sb.o 00:09:05.924 SYMLINK libspdk_nbd.so 00:09:05.924 CC lib/ftl/ftl_l2p.o 00:09:05.924 CC lib/ftl/ftl_l2p_flat.o 00:09:06.182 CC lib/ftl/ftl_nv_cache.o 00:09:06.182 CC lib/scsi/port.o 00:09:06.182 CC lib/ftl/ftl_band.o 00:09:06.182 CC lib/ftl/ftl_band_ops.o 00:09:06.182 CC lib/scsi/scsi.o 00:09:06.182 CC lib/ublk/ublk_rpc.o 00:09:06.182 CC lib/ftl/ftl_writer.o 00:09:06.182 CC lib/nvmf/subsystem.o 00:09:06.182 CC lib/nvmf/nvmf.o 00:09:06.439 CC lib/nvmf/nvmf_rpc.o 00:09:06.439 CC lib/scsi/scsi_bdev.o 00:09:06.439 LIB libspdk_ublk.a 00:09:06.439 SO libspdk_ublk.so.3.0 00:09:06.696 CC lib/scsi/scsi_pr.o 00:09:06.696 SYMLINK libspdk_ublk.so 00:09:06.696 CC lib/scsi/scsi_rpc.o 00:09:06.696 CC lib/scsi/task.o 00:09:06.696 CC lib/ftl/ftl_rq.o 00:09:06.696 CC lib/ftl/ftl_reloc.o 00:09:06.955 CC lib/nvmf/transport.o 00:09:06.955 CC lib/nvmf/tcp.o 00:09:07.214 CC lib/nvmf/stubs.o 00:09:07.214 LIB libspdk_scsi.a 00:09:07.214 CC lib/ftl/ftl_l2p_cache.o 00:09:07.214 SO libspdk_scsi.so.9.0 00:09:07.473 SYMLINK libspdk_scsi.so 00:09:07.473 CC lib/ftl/ftl_p2l.o 00:09:07.473 CC lib/nvmf/mdns_server.o 00:09:07.473 CC lib/nvmf/rdma.o 00:09:07.473 CC lib/ftl/ftl_p2l_log.o 00:09:07.733 CC lib/ftl/mngt/ftl_mngt.o 00:09:07.733 CC lib/nvmf/auth.o 00:09:07.733 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:07.992 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:07.992 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:09:07.992 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:07.992 CC lib/iscsi/conn.o 00:09:07.992 CC lib/vhost/vhost.o 00:09:07.992 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:07.992 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:08.251 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:08.251 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:08.251 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:08.251 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:08.510 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:08.510 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:08.510 CC lib/ftl/utils/ftl_conf.o 00:09:08.510 CC lib/ftl/utils/ftl_md.o 00:09:08.510 CC lib/ftl/utils/ftl_mempool.o 00:09:08.510 CC lib/vhost/vhost_rpc.o 00:09:08.767 CC lib/vhost/vhost_scsi.o 00:09:08.767 CC lib/vhost/vhost_blk.o 00:09:08.767 CC lib/iscsi/init_grp.o 00:09:09.024 CC lib/ftl/utils/ftl_bitmap.o 00:09:09.024 CC lib/vhost/rte_vhost_user.o 00:09:09.024 CC lib/ftl/utils/ftl_property.o 00:09:09.024 CC lib/iscsi/iscsi.o 00:09:09.283 CC lib/iscsi/param.o 00:09:09.283 CC lib/iscsi/portal_grp.o 00:09:09.283 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:09.283 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:09.283 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:09.541 CC lib/iscsi/tgt_node.o 00:09:09.541 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:09.541 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:09.541 CC lib/iscsi/iscsi_subsystem.o 00:09:09.541 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:09.800 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:09:09.800 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:09.800 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:10.059 CC lib/iscsi/iscsi_rpc.o 00:09:10.059 CC lib/iscsi/task.o 00:09:10.059 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:10.059 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:10.059 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:09:10.059 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:09:10.319 LIB libspdk_vhost.a 00:09:10.319 CC lib/ftl/base/ftl_base_dev.o 00:09:10.319 CC lib/ftl/base/ftl_base_bdev.o 00:09:10.319 SO libspdk_vhost.so.8.0 00:09:10.319 CC 
lib/ftl/ftl_trace.o 00:09:10.319 SYMLINK libspdk_vhost.so 00:09:10.578 LIB libspdk_nvmf.a 00:09:10.578 LIB libspdk_ftl.a 00:09:10.578 SO libspdk_nvmf.so.20.0 00:09:10.837 SO libspdk_ftl.so.9.0 00:09:10.837 SYMLINK libspdk_nvmf.so 00:09:11.096 LIB libspdk_iscsi.a 00:09:11.354 SYMLINK libspdk_ftl.so 00:09:11.354 SO libspdk_iscsi.so.8.0 00:09:11.354 SYMLINK libspdk_iscsi.so 00:09:12.010 CC module/env_dpdk/env_dpdk_rpc.o 00:09:12.010 CC module/scheduler/gscheduler/gscheduler.o 00:09:12.010 CC module/keyring/file/keyring.o 00:09:12.010 CC module/keyring/linux/keyring.o 00:09:12.010 CC module/fsdev/aio/fsdev_aio.o 00:09:12.010 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:12.010 CC module/sock/posix/posix.o 00:09:12.010 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:12.010 CC module/accel/error/accel_error.o 00:09:12.010 CC module/blob/bdev/blob_bdev.o 00:09:12.010 LIB libspdk_env_dpdk_rpc.a 00:09:12.010 SO libspdk_env_dpdk_rpc.so.6.0 00:09:12.010 CC module/keyring/linux/keyring_rpc.o 00:09:12.010 LIB libspdk_scheduler_gscheduler.a 00:09:12.010 SYMLINK libspdk_env_dpdk_rpc.so 00:09:12.010 CC module/keyring/file/keyring_rpc.o 00:09:12.010 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:12.010 SO libspdk_scheduler_gscheduler.so.4.0 00:09:12.010 LIB libspdk_scheduler_dpdk_governor.a 00:09:12.285 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:12.285 CC module/accel/error/accel_error_rpc.o 00:09:12.285 LIB libspdk_scheduler_dynamic.a 00:09:12.285 SYMLINK libspdk_scheduler_gscheduler.so 00:09:12.285 CC module/fsdev/aio/linux_aio_mgr.o 00:09:12.285 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:12.285 SO libspdk_scheduler_dynamic.so.4.0 00:09:12.285 LIB libspdk_keyring_linux.a 00:09:12.285 LIB libspdk_keyring_file.a 00:09:12.285 SO libspdk_keyring_linux.so.1.0 00:09:12.285 LIB libspdk_blob_bdev.a 00:09:12.285 SYMLINK libspdk_scheduler_dynamic.so 00:09:12.285 SO libspdk_keyring_file.so.2.0 00:09:12.285 SO libspdk_blob_bdev.so.12.0 00:09:12.285 SYMLINK 
libspdk_keyring_linux.so 00:09:12.285 LIB libspdk_accel_error.a 00:09:12.285 SYMLINK libspdk_keyring_file.so 00:09:12.285 SYMLINK libspdk_blob_bdev.so 00:09:12.285 SO libspdk_accel_error.so.2.0 00:09:12.285 CC module/accel/ioat/accel_ioat.o 00:09:12.285 CC module/accel/ioat/accel_ioat_rpc.o 00:09:12.545 SYMLINK libspdk_accel_error.so 00:09:12.545 CC module/accel/dsa/accel_dsa.o 00:09:12.545 CC module/accel/dsa/accel_dsa_rpc.o 00:09:12.545 CC module/accel/iaa/accel_iaa.o 00:09:12.545 LIB libspdk_accel_ioat.a 00:09:12.545 CC module/bdev/error/vbdev_error.o 00:09:12.545 CC module/bdev/delay/vbdev_delay.o 00:09:12.545 CC module/blobfs/bdev/blobfs_bdev.o 00:09:12.545 SO libspdk_accel_ioat.so.6.0 00:09:12.545 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:12.804 SYMLINK libspdk_accel_ioat.so 00:09:12.804 CC module/accel/iaa/accel_iaa_rpc.o 00:09:12.804 CC module/bdev/error/vbdev_error_rpc.o 00:09:12.804 CC module/bdev/gpt/gpt.o 00:09:12.804 LIB libspdk_accel_dsa.a 00:09:12.804 LIB libspdk_fsdev_aio.a 00:09:12.804 SO libspdk_accel_dsa.so.5.0 00:09:12.804 SO libspdk_fsdev_aio.so.1.0 00:09:12.804 LIB libspdk_sock_posix.a 00:09:12.804 LIB libspdk_accel_iaa.a 00:09:12.804 LIB libspdk_blobfs_bdev.a 00:09:12.804 CC module/bdev/gpt/vbdev_gpt.o 00:09:12.804 SYMLINK libspdk_accel_dsa.so 00:09:12.804 SO libspdk_sock_posix.so.6.0 00:09:12.804 SO libspdk_accel_iaa.so.3.0 00:09:13.063 SO libspdk_blobfs_bdev.so.6.0 00:09:13.063 SYMLINK libspdk_fsdev_aio.so 00:09:13.063 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:13.063 LIB libspdk_bdev_error.a 00:09:13.063 SYMLINK libspdk_blobfs_bdev.so 00:09:13.063 SYMLINK libspdk_accel_iaa.so 00:09:13.063 SYMLINK libspdk_sock_posix.so 00:09:13.063 SO libspdk_bdev_error.so.6.0 00:09:13.063 SYMLINK libspdk_bdev_error.so 00:09:13.063 CC module/bdev/lvol/vbdev_lvol.o 00:09:13.063 CC module/bdev/malloc/bdev_malloc.o 00:09:13.063 LIB libspdk_bdev_delay.a 00:09:13.063 CC module/bdev/null/bdev_null.o 00:09:13.321 SO libspdk_bdev_delay.so.6.0 00:09:13.321 CC 
module/bdev/nvme/bdev_nvme.o 00:09:13.322 CC module/bdev/passthru/vbdev_passthru.o 00:09:13.322 CC module/bdev/raid/bdev_raid.o 00:09:13.322 LIB libspdk_bdev_gpt.a 00:09:13.322 CC module/bdev/split/vbdev_split.o 00:09:13.322 SO libspdk_bdev_gpt.so.6.0 00:09:13.322 SYMLINK libspdk_bdev_delay.so 00:09:13.322 CC module/bdev/raid/bdev_raid_rpc.o 00:09:13.322 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:13.322 SYMLINK libspdk_bdev_gpt.so 00:09:13.322 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:13.580 CC module/bdev/null/bdev_null_rpc.o 00:09:13.580 CC module/bdev/split/vbdev_split_rpc.o 00:09:13.580 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:13.580 CC module/bdev/raid/bdev_raid_sb.o 00:09:13.580 LIB libspdk_bdev_passthru.a 00:09:13.580 SO libspdk_bdev_passthru.so.6.0 00:09:13.580 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:13.838 LIB libspdk_bdev_null.a 00:09:13.838 SYMLINK libspdk_bdev_passthru.so 00:09:13.838 LIB libspdk_bdev_split.a 00:09:13.838 CC module/bdev/raid/raid0.o 00:09:13.839 LIB libspdk_bdev_zone_block.a 00:09:13.839 SO libspdk_bdev_null.so.6.0 00:09:13.839 SO libspdk_bdev_split.so.6.0 00:09:13.839 SO libspdk_bdev_zone_block.so.6.0 00:09:13.839 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:13.839 SYMLINK libspdk_bdev_null.so 00:09:13.839 SYMLINK libspdk_bdev_split.so 00:09:13.839 SYMLINK libspdk_bdev_zone_block.so 00:09:13.839 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:13.839 LIB libspdk_bdev_malloc.a 00:09:13.839 CC module/bdev/aio/bdev_aio.o 00:09:13.839 CC module/bdev/nvme/nvme_rpc.o 00:09:13.839 SO libspdk_bdev_malloc.so.6.0 00:09:14.097 SYMLINK libspdk_bdev_malloc.so 00:09:14.097 CC module/bdev/nvme/bdev_mdns_client.o 00:09:14.097 CC module/bdev/ftl/bdev_ftl.o 00:09:14.097 CC module/bdev/iscsi/bdev_iscsi.o 00:09:14.097 CC module/bdev/nvme/vbdev_opal.o 00:09:14.097 CC module/bdev/aio/bdev_aio_rpc.o 00:09:14.097 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:14.355 LIB libspdk_bdev_lvol.a 00:09:14.355 SO libspdk_bdev_lvol.so.6.0 
00:09:14.355 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:14.355 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:14.355 LIB libspdk_bdev_aio.a 00:09:14.355 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:14.355 SYMLINK libspdk_bdev_lvol.so 00:09:14.355 SO libspdk_bdev_aio.so.6.0 00:09:14.613 SYMLINK libspdk_bdev_aio.so 00:09:14.613 CC module/bdev/raid/raid1.o 00:09:14.613 CC module/bdev/raid/concat.o 00:09:14.613 CC module/bdev/raid/raid5f.o 00:09:14.613 LIB libspdk_bdev_iscsi.a 00:09:14.613 SO libspdk_bdev_iscsi.so.6.0 00:09:14.613 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:14.613 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:14.613 LIB libspdk_bdev_ftl.a 00:09:14.613 SYMLINK libspdk_bdev_iscsi.so 00:09:14.613 SO libspdk_bdev_ftl.so.6.0 00:09:14.613 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:14.613 SYMLINK libspdk_bdev_ftl.so 00:09:15.179 LIB libspdk_bdev_raid.a 00:09:15.179 LIB libspdk_bdev_virtio.a 00:09:15.179 SO libspdk_bdev_raid.so.6.0 00:09:15.435 SO libspdk_bdev_virtio.so.6.0 00:09:15.435 SYMLINK libspdk_bdev_raid.so 00:09:15.435 SYMLINK libspdk_bdev_virtio.so 00:09:16.807 LIB libspdk_bdev_nvme.a 00:09:16.807 SO libspdk_bdev_nvme.so.7.1 00:09:17.065 SYMLINK libspdk_bdev_nvme.so 00:09:17.631 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:17.631 CC module/event/subsystems/sock/sock.o 00:09:17.631 CC module/event/subsystems/scheduler/scheduler.o 00:09:17.631 CC module/event/subsystems/fsdev/fsdev.o 00:09:17.631 CC module/event/subsystems/keyring/keyring.o 00:09:17.631 CC module/event/subsystems/iobuf/iobuf.o 00:09:17.631 CC module/event/subsystems/vmd/vmd.o 00:09:17.631 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:17.631 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:17.631 LIB libspdk_event_keyring.a 00:09:17.631 LIB libspdk_event_scheduler.a 00:09:17.631 LIB libspdk_event_vhost_blk.a 00:09:17.631 SO libspdk_event_keyring.so.1.0 00:09:17.631 LIB libspdk_event_vmd.a 00:09:17.631 SO libspdk_event_scheduler.so.4.0 00:09:17.632 LIB libspdk_event_sock.a 
00:09:17.632 LIB libspdk_event_iobuf.a 00:09:17.632 LIB libspdk_event_fsdev.a 00:09:17.632 SO libspdk_event_vhost_blk.so.3.0 00:09:17.632 SO libspdk_event_vmd.so.6.0 00:09:17.632 SO libspdk_event_sock.so.5.0 00:09:17.890 SO libspdk_event_iobuf.so.3.0 00:09:17.890 SO libspdk_event_fsdev.so.1.0 00:09:17.890 SYMLINK libspdk_event_keyring.so 00:09:17.890 SYMLINK libspdk_event_scheduler.so 00:09:17.890 SYMLINK libspdk_event_vhost_blk.so 00:09:17.890 SYMLINK libspdk_event_sock.so 00:09:17.890 SYMLINK libspdk_event_vmd.so 00:09:17.890 SYMLINK libspdk_event_fsdev.so 00:09:17.890 SYMLINK libspdk_event_iobuf.so 00:09:18.291 CC module/event/subsystems/accel/accel.o 00:09:18.291 LIB libspdk_event_accel.a 00:09:18.291 SO libspdk_event_accel.so.6.0 00:09:18.550 SYMLINK libspdk_event_accel.so 00:09:18.550 CC module/event/subsystems/bdev/bdev.o 00:09:18.806 LIB libspdk_event_bdev.a 00:09:19.065 SO libspdk_event_bdev.so.6.0 00:09:19.065 SYMLINK libspdk_event_bdev.so 00:09:19.323 CC module/event/subsystems/scsi/scsi.o 00:09:19.323 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:19.323 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:19.323 CC module/event/subsystems/nbd/nbd.o 00:09:19.323 CC module/event/subsystems/ublk/ublk.o 00:09:19.323 LIB libspdk_event_ublk.a 00:09:19.582 LIB libspdk_event_nbd.a 00:09:19.582 LIB libspdk_event_scsi.a 00:09:19.582 SO libspdk_event_ublk.so.3.0 00:09:19.582 SO libspdk_event_nbd.so.6.0 00:09:19.582 SO libspdk_event_scsi.so.6.0 00:09:19.582 SYMLINK libspdk_event_nbd.so 00:09:19.582 SYMLINK libspdk_event_ublk.so 00:09:19.582 SYMLINK libspdk_event_scsi.so 00:09:19.582 LIB libspdk_event_nvmf.a 00:09:19.582 SO libspdk_event_nvmf.so.6.0 00:09:19.841 SYMLINK libspdk_event_nvmf.so 00:09:19.841 CC module/event/subsystems/iscsi/iscsi.o 00:09:19.841 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:20.101 LIB libspdk_event_vhost_scsi.a 00:09:20.101 SO libspdk_event_vhost_scsi.so.3.0 00:09:20.101 LIB libspdk_event_iscsi.a 00:09:20.101 SO 
libspdk_event_iscsi.so.6.0 00:09:20.101 SYMLINK libspdk_event_vhost_scsi.so 00:09:20.101 SYMLINK libspdk_event_iscsi.so 00:09:20.360 SO libspdk.so.6.0 00:09:20.360 SYMLINK libspdk.so 00:09:20.619 TEST_HEADER include/spdk/accel.h 00:09:20.619 TEST_HEADER include/spdk/accel_module.h 00:09:20.619 TEST_HEADER include/spdk/assert.h 00:09:20.619 CXX app/trace/trace.o 00:09:20.619 TEST_HEADER include/spdk/barrier.h 00:09:20.619 TEST_HEADER include/spdk/base64.h 00:09:20.619 TEST_HEADER include/spdk/bdev.h 00:09:20.619 TEST_HEADER include/spdk/bdev_module.h 00:09:20.619 CC app/trace_record/trace_record.o 00:09:20.619 TEST_HEADER include/spdk/bdev_zone.h 00:09:20.619 TEST_HEADER include/spdk/bit_array.h 00:09:20.619 TEST_HEADER include/spdk/bit_pool.h 00:09:20.619 TEST_HEADER include/spdk/blob_bdev.h 00:09:20.619 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:20.619 TEST_HEADER include/spdk/blobfs.h 00:09:20.619 TEST_HEADER include/spdk/blob.h 00:09:20.619 TEST_HEADER include/spdk/conf.h 00:09:20.619 TEST_HEADER include/spdk/config.h 00:09:20.619 TEST_HEADER include/spdk/cpuset.h 00:09:20.619 TEST_HEADER include/spdk/crc16.h 00:09:20.619 TEST_HEADER include/spdk/crc32.h 00:09:20.619 TEST_HEADER include/spdk/crc64.h 00:09:20.619 CC app/iscsi_tgt/iscsi_tgt.o 00:09:20.619 TEST_HEADER include/spdk/dif.h 00:09:20.619 TEST_HEADER include/spdk/dma.h 00:09:20.619 TEST_HEADER include/spdk/endian.h 00:09:20.619 TEST_HEADER include/spdk/env_dpdk.h 00:09:20.619 TEST_HEADER include/spdk/env.h 00:09:20.619 TEST_HEADER include/spdk/event.h 00:09:20.619 TEST_HEADER include/spdk/fd_group.h 00:09:20.619 TEST_HEADER include/spdk/fd.h 00:09:20.619 TEST_HEADER include/spdk/file.h 00:09:20.619 TEST_HEADER include/spdk/fsdev.h 00:09:20.619 TEST_HEADER include/spdk/fsdev_module.h 00:09:20.619 TEST_HEADER include/spdk/ftl.h 00:09:20.619 CC app/nvmf_tgt/nvmf_main.o 00:09:20.619 TEST_HEADER include/spdk/fuse_dispatcher.h 00:09:20.619 TEST_HEADER include/spdk/gpt_spec.h 00:09:20.878 TEST_HEADER 
include/spdk/hexlify.h 00:09:20.878 TEST_HEADER include/spdk/histogram_data.h 00:09:20.878 CC examples/ioat/perf/perf.o 00:09:20.878 TEST_HEADER include/spdk/idxd.h 00:09:20.878 TEST_HEADER include/spdk/idxd_spec.h 00:09:20.878 CC examples/util/zipf/zipf.o 00:09:20.878 TEST_HEADER include/spdk/init.h 00:09:20.878 TEST_HEADER include/spdk/ioat.h 00:09:20.878 TEST_HEADER include/spdk/ioat_spec.h 00:09:20.878 CC test/thread/poller_perf/poller_perf.o 00:09:20.878 TEST_HEADER include/spdk/iscsi_spec.h 00:09:20.878 TEST_HEADER include/spdk/json.h 00:09:20.878 TEST_HEADER include/spdk/jsonrpc.h 00:09:20.878 TEST_HEADER include/spdk/keyring.h 00:09:20.878 TEST_HEADER include/spdk/keyring_module.h 00:09:20.878 TEST_HEADER include/spdk/likely.h 00:09:20.878 TEST_HEADER include/spdk/log.h 00:09:20.878 TEST_HEADER include/spdk/lvol.h 00:09:20.878 TEST_HEADER include/spdk/md5.h 00:09:20.878 TEST_HEADER include/spdk/memory.h 00:09:20.878 TEST_HEADER include/spdk/mmio.h 00:09:20.878 TEST_HEADER include/spdk/nbd.h 00:09:20.878 TEST_HEADER include/spdk/net.h 00:09:20.878 TEST_HEADER include/spdk/notify.h 00:09:20.878 TEST_HEADER include/spdk/nvme.h 00:09:20.878 CC test/app/bdev_svc/bdev_svc.o 00:09:20.878 TEST_HEADER include/spdk/nvme_intel.h 00:09:20.878 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:20.878 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:20.878 CC test/dma/test_dma/test_dma.o 00:09:20.878 TEST_HEADER include/spdk/nvme_spec.h 00:09:20.878 TEST_HEADER include/spdk/nvme_zns.h 00:09:20.878 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:20.878 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:20.878 TEST_HEADER include/spdk/nvmf.h 00:09:20.878 TEST_HEADER include/spdk/nvmf_spec.h 00:09:20.878 TEST_HEADER include/spdk/nvmf_transport.h 00:09:20.878 TEST_HEADER include/spdk/opal.h 00:09:20.878 TEST_HEADER include/spdk/opal_spec.h 00:09:20.878 TEST_HEADER include/spdk/pci_ids.h 00:09:20.878 TEST_HEADER include/spdk/pipe.h 00:09:20.878 TEST_HEADER include/spdk/queue.h 00:09:20.878 
TEST_HEADER include/spdk/reduce.h 00:09:20.878 TEST_HEADER include/spdk/rpc.h 00:09:20.878 TEST_HEADER include/spdk/scheduler.h 00:09:20.878 TEST_HEADER include/spdk/scsi.h 00:09:20.878 TEST_HEADER include/spdk/scsi_spec.h 00:09:20.878 TEST_HEADER include/spdk/sock.h 00:09:20.878 TEST_HEADER include/spdk/stdinc.h 00:09:20.878 TEST_HEADER include/spdk/string.h 00:09:20.878 TEST_HEADER include/spdk/thread.h 00:09:20.878 TEST_HEADER include/spdk/trace.h 00:09:20.878 TEST_HEADER include/spdk/trace_parser.h 00:09:20.878 TEST_HEADER include/spdk/tree.h 00:09:20.878 TEST_HEADER include/spdk/ublk.h 00:09:20.878 TEST_HEADER include/spdk/util.h 00:09:20.878 TEST_HEADER include/spdk/uuid.h 00:09:20.878 TEST_HEADER include/spdk/version.h 00:09:20.878 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:20.878 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:20.878 TEST_HEADER include/spdk/vhost.h 00:09:20.878 TEST_HEADER include/spdk/vmd.h 00:09:20.878 LINK zipf 00:09:20.878 TEST_HEADER include/spdk/xor.h 00:09:21.136 TEST_HEADER include/spdk/zipf.h 00:09:21.136 CXX test/cpp_headers/accel.o 00:09:21.136 LINK iscsi_tgt 00:09:21.136 LINK poller_perf 00:09:21.136 LINK spdk_trace_record 00:09:21.136 LINK ioat_perf 00:09:21.136 LINK nvmf_tgt 00:09:21.136 LINK bdev_svc 00:09:21.136 CXX test/cpp_headers/accel_module.o 00:09:21.136 LINK spdk_trace 00:09:21.395 CC examples/ioat/verify/verify.o 00:09:21.395 CC app/spdk_tgt/spdk_tgt.o 00:09:21.395 CC test/rpc_client/rpc_client_test.o 00:09:21.395 CXX test/cpp_headers/assert.o 00:09:21.395 CC app/spdk_lspci/spdk_lspci.o 00:09:21.395 CC test/event/event_perf/event_perf.o 00:09:21.395 CC test/env/mem_callbacks/mem_callbacks.o 00:09:21.654 LINK test_dma 00:09:21.654 CC test/app/histogram_perf/histogram_perf.o 00:09:21.654 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:21.654 LINK spdk_lspci 00:09:21.654 LINK spdk_tgt 00:09:21.654 LINK rpc_client_test 00:09:21.654 CXX test/cpp_headers/barrier.o 00:09:21.654 LINK event_perf 00:09:21.654 LINK verify 
00:09:21.654 LINK histogram_perf 00:09:21.912 CC test/app/jsoncat/jsoncat.o 00:09:21.912 CXX test/cpp_headers/base64.o 00:09:21.912 CC test/app/stub/stub.o 00:09:21.912 CC test/event/reactor/reactor.o 00:09:21.912 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:21.912 CC app/spdk_nvme_perf/perf.o 00:09:22.171 CXX test/cpp_headers/bdev.o 00:09:22.171 LINK jsoncat 00:09:22.171 CC test/event/reactor_perf/reactor_perf.o 00:09:22.171 CC test/accel/dif/dif.o 00:09:22.171 LINK nvme_fuzz 00:09:22.171 LINK reactor 00:09:22.171 LINK stub 00:09:22.171 LINK interrupt_tgt 00:09:22.171 LINK mem_callbacks 00:09:22.171 LINK reactor_perf 00:09:22.171 CXX test/cpp_headers/bdev_module.o 00:09:22.429 CC test/env/vtophys/vtophys.o 00:09:22.429 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:22.429 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:22.429 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:22.429 CXX test/cpp_headers/bdev_zone.o 00:09:22.429 CC test/event/app_repeat/app_repeat.o 00:09:22.688 CC test/event/scheduler/scheduler.o 00:09:22.688 LINK vtophys 00:09:22.688 CC examples/thread/thread/thread_ex.o 00:09:22.688 LINK env_dpdk_post_init 00:09:22.688 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:22.688 LINK app_repeat 00:09:22.688 CXX test/cpp_headers/bit_array.o 00:09:22.947 CC test/env/memory/memory_ut.o 00:09:22.947 LINK scheduler 00:09:22.947 CXX test/cpp_headers/bit_pool.o 00:09:22.947 CC test/env/pci/pci_ut.o 00:09:22.947 LINK thread 00:09:22.947 LINK dif 00:09:23.206 CC examples/sock/hello_world/hello_sock.o 00:09:23.206 LINK spdk_nvme_perf 00:09:23.206 CXX test/cpp_headers/blob_bdev.o 00:09:23.206 LINK vhost_fuzz 00:09:23.206 CC test/blobfs/mkfs/mkfs.o 00:09:23.464 CXX test/cpp_headers/blobfs_bdev.o 00:09:23.464 CC app/spdk_nvme_identify/identify.o 00:09:23.464 CC examples/vmd/lsvmd/lsvmd.o 00:09:23.464 LINK hello_sock 00:09:23.464 LINK pci_ut 00:09:23.464 CC examples/idxd/perf/perf.o 00:09:23.464 CC app/spdk_nvme_discover/discovery_aer.o 00:09:23.464 
LINK mkfs 00:09:23.464 LINK lsvmd 00:09:23.464 CXX test/cpp_headers/blobfs.o 00:09:23.723 CXX test/cpp_headers/blob.o 00:09:23.723 CXX test/cpp_headers/conf.o 00:09:23.723 LINK spdk_nvme_discover 00:09:23.981 CC examples/vmd/led/led.o 00:09:23.981 CXX test/cpp_headers/config.o 00:09:23.981 LINK idxd_perf 00:09:23.981 CXX test/cpp_headers/cpuset.o 00:09:23.981 CC examples/accel/perf/accel_perf.o 00:09:23.981 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:23.981 LINK led 00:09:23.981 CC app/spdk_top/spdk_top.o 00:09:24.240 CC examples/blob/hello_world/hello_blob.o 00:09:24.240 CXX test/cpp_headers/crc16.o 00:09:24.240 CC examples/nvme/hello_world/hello_world.o 00:09:24.498 CXX test/cpp_headers/crc32.o 00:09:24.498 LINK memory_ut 00:09:24.498 CC examples/nvme/reconnect/reconnect.o 00:09:24.498 LINK hello_blob 00:09:24.498 LINK hello_fsdev 00:09:24.498 LINK spdk_nvme_identify 00:09:24.498 CXX test/cpp_headers/crc64.o 00:09:24.757 LINK hello_world 00:09:24.757 CXX test/cpp_headers/dif.o 00:09:24.757 LINK accel_perf 00:09:24.757 CC app/vhost/vhost.o 00:09:24.757 CC examples/blob/cli/blobcli.o 00:09:24.757 LINK iscsi_fuzz 00:09:24.757 CXX test/cpp_headers/dma.o 00:09:24.757 LINK reconnect 00:09:25.016 CC test/nvme/aer/aer.o 00:09:25.016 CC test/lvol/esnap/esnap.o 00:09:25.016 LINK vhost 00:09:25.016 CXX test/cpp_headers/endian.o 00:09:25.016 CC app/spdk_dd/spdk_dd.o 00:09:25.274 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:25.274 CC app/fio/nvme/fio_plugin.o 00:09:25.274 CXX test/cpp_headers/env_dpdk.o 00:09:25.274 CXX test/cpp_headers/env.o 00:09:25.274 LINK spdk_top 00:09:25.274 CC test/bdev/bdevio/bdevio.o 00:09:25.274 LINK aer 00:09:25.533 LINK blobcli 00:09:25.533 CXX test/cpp_headers/event.o 00:09:25.533 CXX test/cpp_headers/fd_group.o 00:09:25.533 LINK spdk_dd 00:09:25.791 CC test/nvme/reset/reset.o 00:09:25.791 CC examples/bdev/hello_world/hello_bdev.o 00:09:25.791 CXX test/cpp_headers/fd.o 00:09:25.791 LINK bdevio 00:09:25.791 CC 
examples/bdev/bdevperf/bdevperf.o 00:09:25.791 CXX test/cpp_headers/file.o 00:09:25.791 LINK nvme_manage 00:09:25.791 CC app/fio/bdev/fio_plugin.o 00:09:26.050 LINK spdk_nvme 00:09:26.050 LINK hello_bdev 00:09:26.050 CC examples/nvme/arbitration/arbitration.o 00:09:26.050 LINK reset 00:09:26.050 CXX test/cpp_headers/fsdev.o 00:09:26.050 CXX test/cpp_headers/fsdev_module.o 00:09:26.308 CC examples/nvme/hotplug/hotplug.o 00:09:26.308 CC test/nvme/sgl/sgl.o 00:09:26.308 CXX test/cpp_headers/ftl.o 00:09:26.308 CXX test/cpp_headers/fuse_dispatcher.o 00:09:26.567 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:26.567 LINK arbitration 00:09:26.567 CXX test/cpp_headers/gpt_spec.o 00:09:26.567 CXX test/cpp_headers/hexlify.o 00:09:26.567 LINK hotplug 00:09:26.567 CC examples/nvme/abort/abort.o 00:09:26.567 LINK spdk_bdev 00:09:26.567 LINK sgl 00:09:26.826 CXX test/cpp_headers/histogram_data.o 00:09:26.826 CXX test/cpp_headers/idxd.o 00:09:26.826 CXX test/cpp_headers/idxd_spec.o 00:09:26.826 LINK cmb_copy 00:09:26.826 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:26.826 CC test/nvme/e2edp/nvme_dp.o 00:09:26.826 CXX test/cpp_headers/init.o 00:09:26.826 CC test/nvme/overhead/overhead.o 00:09:26.826 CXX test/cpp_headers/ioat.o 00:09:26.826 CXX test/cpp_headers/ioat_spec.o 00:09:26.826 CXX test/cpp_headers/iscsi_spec.o 00:09:27.085 LINK pmr_persistence 00:09:27.085 LINK bdevperf 00:09:27.085 LINK abort 00:09:27.085 CXX test/cpp_headers/json.o 00:09:27.085 LINK nvme_dp 00:09:27.345 CC test/nvme/err_injection/err_injection.o 00:09:27.345 CC test/nvme/startup/startup.o 00:09:27.345 CC test/nvme/reserve/reserve.o 00:09:27.345 CXX test/cpp_headers/jsonrpc.o 00:09:27.345 LINK overhead 00:09:27.345 CXX test/cpp_headers/keyring.o 00:09:27.345 CC test/nvme/simple_copy/simple_copy.o 00:09:27.345 LINK err_injection 00:09:27.345 LINK startup 00:09:27.603 CXX test/cpp_headers/keyring_module.o 00:09:27.603 CC examples/nvmf/nvmf/nvmf.o 00:09:27.603 LINK reserve 00:09:27.603 CC 
test/nvme/connect_stress/connect_stress.o 00:09:27.603 CC test/nvme/boot_partition/boot_partition.o 00:09:27.603 CC test/nvme/compliance/nvme_compliance.o 00:09:27.603 LINK simple_copy 00:09:27.603 CXX test/cpp_headers/likely.o 00:09:27.603 CXX test/cpp_headers/log.o 00:09:27.872 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:27.872 CC test/nvme/fused_ordering/fused_ordering.o 00:09:27.872 LINK boot_partition 00:09:27.872 LINK connect_stress 00:09:27.872 CXX test/cpp_headers/lvol.o 00:09:27.872 LINK nvmf 00:09:27.872 CC test/nvme/fdp/fdp.o 00:09:27.872 LINK doorbell_aers 00:09:28.129 CXX test/cpp_headers/md5.o 00:09:28.129 LINK fused_ordering 00:09:28.129 CXX test/cpp_headers/memory.o 00:09:28.129 CC test/nvme/cuse/cuse.o 00:09:28.129 CXX test/cpp_headers/mmio.o 00:09:28.129 LINK nvme_compliance 00:09:28.129 CXX test/cpp_headers/nbd.o 00:09:28.129 CXX test/cpp_headers/net.o 00:09:28.129 CXX test/cpp_headers/notify.o 00:09:28.129 CXX test/cpp_headers/nvme.o 00:09:28.129 CXX test/cpp_headers/nvme_intel.o 00:09:28.387 CXX test/cpp_headers/nvme_ocssd.o 00:09:28.387 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:28.387 CXX test/cpp_headers/nvme_spec.o 00:09:28.387 CXX test/cpp_headers/nvme_zns.o 00:09:28.387 CXX test/cpp_headers/nvmf_cmd.o 00:09:28.387 LINK fdp 00:09:28.387 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:28.387 CXX test/cpp_headers/nvmf.o 00:09:28.387 CXX test/cpp_headers/nvmf_spec.o 00:09:28.387 CXX test/cpp_headers/nvmf_transport.o 00:09:28.645 CXX test/cpp_headers/opal.o 00:09:28.645 CXX test/cpp_headers/opal_spec.o 00:09:28.645 CXX test/cpp_headers/pci_ids.o 00:09:28.645 CXX test/cpp_headers/pipe.o 00:09:28.645 CXX test/cpp_headers/queue.o 00:09:28.645 CXX test/cpp_headers/reduce.o 00:09:28.645 CXX test/cpp_headers/rpc.o 00:09:28.645 CXX test/cpp_headers/scheduler.o 00:09:28.645 CXX test/cpp_headers/scsi.o 00:09:28.645 CXX test/cpp_headers/scsi_spec.o 00:09:28.902 CXX test/cpp_headers/sock.o 00:09:28.902 CXX test/cpp_headers/stdinc.o 00:09:28.902 CXX 
test/cpp_headers/string.o 00:09:28.902 CXX test/cpp_headers/thread.o 00:09:28.902 CXX test/cpp_headers/trace.o 00:09:28.902 CXX test/cpp_headers/trace_parser.o 00:09:28.902 CXX test/cpp_headers/tree.o 00:09:28.902 CXX test/cpp_headers/ublk.o 00:09:28.902 CXX test/cpp_headers/util.o 00:09:28.902 CXX test/cpp_headers/uuid.o 00:09:28.902 CXX test/cpp_headers/version.o 00:09:28.902 CXX test/cpp_headers/vfio_user_pci.o 00:09:28.902 CXX test/cpp_headers/vfio_user_spec.o 00:09:29.161 CXX test/cpp_headers/vhost.o 00:09:29.161 CXX test/cpp_headers/vmd.o 00:09:29.161 CXX test/cpp_headers/xor.o 00:09:29.161 CXX test/cpp_headers/zipf.o 00:09:29.726 LINK cuse 00:09:33.015 LINK esnap 00:09:33.583 00:09:33.583 real 1m55.449s 00:09:33.583 user 10m45.237s 00:09:33.583 sys 2m3.595s 00:09:33.583 13:03:39 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:09:33.583 ************************************ 00:09:33.583 END TEST make 00:09:33.583 ************************************ 00:09:33.583 13:03:39 make -- common/autotest_common.sh@10 -- $ set +x 00:09:33.583 13:03:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:33.583 13:03:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:33.583 13:03:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:33.583 13:03:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:33.583 13:03:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:09:33.583 13:03:39 -- pm/common@44 -- $ pid=5254 00:09:33.583 13:03:39 -- pm/common@50 -- $ kill -TERM 5254 00:09:33.583 13:03:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:33.583 13:03:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:33.583 13:03:39 -- pm/common@44 -- $ pid=5255 00:09:33.583 13:03:39 -- pm/common@50 -- $ kill -TERM 5255 00:09:33.583 13:03:39 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:33.583 13:03:39 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:33.583 13:03:40 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.583 13:03:40 -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.583 13:03:40 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.842 13:03:40 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.842 13:03:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.842 13:03:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.842 13:03:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.842 13:03:40 -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.842 13:03:40 -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.842 13:03:40 -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.842 13:03:40 -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.842 13:03:40 -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.842 13:03:40 -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.842 13:03:40 -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.842 13:03:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.842 13:03:40 -- scripts/common.sh@344 -- # case "$op" in 00:09:33.842 13:03:40 -- scripts/common.sh@345 -- # : 1 00:09:33.842 13:03:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.842 13:03:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.842 13:03:40 -- scripts/common.sh@365 -- # decimal 1 00:09:33.842 13:03:40 -- scripts/common.sh@353 -- # local d=1 00:09:33.842 13:03:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.842 13:03:40 -- scripts/common.sh@355 -- # echo 1 00:09:33.842 13:03:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.842 13:03:40 -- scripts/common.sh@366 -- # decimal 2 00:09:33.842 13:03:40 -- scripts/common.sh@353 -- # local d=2 00:09:33.842 13:03:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.842 13:03:40 -- scripts/common.sh@355 -- # echo 2 00:09:33.842 13:03:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.842 13:03:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.842 13:03:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.842 13:03:40 -- scripts/common.sh@368 -- # return 0 00:09:33.842 13:03:40 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.842 13:03:40 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.842 --rc genhtml_branch_coverage=1 00:09:33.842 --rc genhtml_function_coverage=1 00:09:33.842 --rc genhtml_legend=1 00:09:33.842 --rc geninfo_all_blocks=1 00:09:33.842 --rc geninfo_unexecuted_blocks=1 00:09:33.842 00:09:33.842 ' 00:09:33.842 13:03:40 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.842 --rc genhtml_branch_coverage=1 00:09:33.842 --rc genhtml_function_coverage=1 00:09:33.842 --rc genhtml_legend=1 00:09:33.842 --rc geninfo_all_blocks=1 00:09:33.842 --rc geninfo_unexecuted_blocks=1 00:09:33.842 00:09:33.842 ' 00:09:33.842 13:03:40 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.842 --rc genhtml_branch_coverage=1 00:09:33.842 --rc 
genhtml_function_coverage=1 00:09:33.842 --rc genhtml_legend=1 00:09:33.842 --rc geninfo_all_blocks=1 00:09:33.842 --rc geninfo_unexecuted_blocks=1 00:09:33.842 00:09:33.842 ' 00:09:33.842 13:03:40 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.842 --rc genhtml_branch_coverage=1 00:09:33.842 --rc genhtml_function_coverage=1 00:09:33.842 --rc genhtml_legend=1 00:09:33.842 --rc geninfo_all_blocks=1 00:09:33.842 --rc geninfo_unexecuted_blocks=1 00:09:33.842 00:09:33.842 ' 00:09:33.842 13:03:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.842 13:03:40 -- nvmf/common.sh@7 -- # uname -s 00:09:33.842 13:03:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.842 13:03:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.842 13:03:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.842 13:03:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.842 13:03:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.842 13:03:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.842 13:03:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.843 13:03:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.843 13:03:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.843 13:03:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.843 13:03:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c28d152-baac-47ce-8835-611fa8ea9449 00:09:33.843 13:03:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=9c28d152-baac-47ce-8835-611fa8ea9449 00:09:33.843 13:03:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.843 13:03:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.843 13:03:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:33.843 13:03:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:09:33.843 13:03:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.843 13:03:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.843 13:03:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.843 13:03:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.843 13:03:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.843 13:03:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.843 13:03:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.843 13:03:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.843 13:03:40 -- paths/export.sh@5 -- # export PATH 00:09:33.843 13:03:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.843 13:03:40 -- nvmf/common.sh@51 -- # : 0 00:09:33.843 13:03:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.843 13:03:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.843 13:03:40 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:09:33.843 13:03:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.843 13:03:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.843 13:03:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.843 13:03:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.843 13:03:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.843 13:03:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.843 13:03:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:33.843 13:03:40 -- spdk/autotest.sh@32 -- # uname -s 00:09:33.843 13:03:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:33.843 13:03:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:33.843 13:03:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:33.843 13:03:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:33.843 13:03:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:33.843 13:03:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:33.843 13:03:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:33.843 13:03:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:33.843 13:03:40 -- spdk/autotest.sh@48 -- # udevadm_pid=54504 00:09:33.843 13:03:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:33.843 13:03:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:33.843 13:03:40 -- pm/common@17 -- # local monitor 00:09:33.843 13:03:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:33.843 13:03:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:33.843 13:03:40 -- pm/common@25 -- # sleep 1 00:09:33.843 13:03:40 -- pm/common@21 -- # date +%s 00:09:33.843 13:03:40 -- 
pm/common@21 -- # date +%s 00:09:33.843 13:03:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490220 00:09:33.843 13:03:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490220 00:09:33.843 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490220_collect-vmstat.pm.log 00:09:33.843 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490220_collect-cpu-load.pm.log 00:09:34.780 13:03:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:34.780 13:03:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:34.780 13:03:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.780 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:09:34.780 13:03:41 -- spdk/autotest.sh@59 -- # create_test_list 00:09:34.780 13:03:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:09:34.780 13:03:41 -- common/autotest_common.sh@10 -- # set +x 00:09:35.039 13:03:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:09:35.039 13:03:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:09:35.039 13:03:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:09:35.039 13:03:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:09:35.039 13:03:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:09:35.039 13:03:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:35.039 13:03:41 -- common/autotest_common.sh@1457 -- # uname 00:09:35.039 13:03:41 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:09:35.039 13:03:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:35.039 13:03:41 -- common/autotest_common.sh@1477 -- 
# uname 00:09:35.039 13:03:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:09:35.039 13:03:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:35.039 13:03:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:09:35.039 lcov: LCOV version 1.15 00:09:35.040 13:03:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:53.199 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:53.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:11.276 13:04:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:10:11.276 13:04:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.276 13:04:14 -- common/autotest_common.sh@10 -- # set +x 00:10:11.276 13:04:14 -- spdk/autotest.sh@78 -- # rm -f 00:10:11.276 13:04:14 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:11.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.276 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:11.276 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:11.276 13:04:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:10:11.276 13:04:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:11.276 13:04:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:11.276 13:04:15 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:11.276 
13:04:15 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:11.276 13:04:15 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:11.276 13:04:15 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:11.276 13:04:15 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:11.276 13:04:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:11.276 13:04:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:11.276 13:04:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:11.276 13:04:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:11.276 13:04:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:11.276 13:04:15 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:11.276 13:04:15 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:11.276 13:04:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:11.276 13:04:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:11.276 13:04:15 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:11.276 13:04:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:11.276 13:04:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:11.276 13:04:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:11.276 13:04:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:10:11.276 13:04:15 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:10:11.276 13:04:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:11.276 13:04:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:11.276 13:04:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:11.276 13:04:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:10:11.276 13:04:15 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:10:11.276 13:04:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:11.276 13:04:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:11.276 13:04:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:10:11.276 13:04:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:11.276 13:04:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:11.276 13:04:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:10:11.276 13:04:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:10:11.276 13:04:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:11.276 No valid GPT data, bailing 00:10:11.276 13:04:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:11.276 13:04:15 -- scripts/common.sh@394 -- # pt= 00:10:11.276 13:04:15 -- scripts/common.sh@395 -- # return 1 00:10:11.276 13:04:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:11.276 1+0 records in 00:10:11.276 1+0 records out 00:10:11.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577924 s, 181 MB/s 00:10:11.276 13:04:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:11.276 13:04:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:11.276 13:04:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:10:11.276 13:04:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:10:11.276 13:04:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:11.276 No valid GPT data, bailing 00:10:11.276 13:04:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:11.276 13:04:15 -- scripts/common.sh@394 -- # pt= 00:10:11.276 13:04:15 -- scripts/common.sh@395 -- # return 1 00:10:11.276 13:04:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:11.276 1+0 records in 00:10:11.276 1+0 records 
out 00:10:11.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00549922 s, 191 MB/s 00:10:11.276 13:04:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:11.276 13:04:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:11.276 13:04:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:10:11.276 13:04:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:10:11.276 13:04:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:10:11.276 No valid GPT data, bailing 00:10:11.276 13:04:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:11.276 13:04:15 -- scripts/common.sh@394 -- # pt= 00:10:11.276 13:04:15 -- scripts/common.sh@395 -- # return 1 00:10:11.276 13:04:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:11.276 1+0 records in 00:10:11.276 1+0 records out 00:10:11.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500982 s, 209 MB/s 00:10:11.277 13:04:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:10:11.277 13:04:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:10:11.277 13:04:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:10:11.277 13:04:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:10:11.277 13:04:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:11.277 No valid GPT data, bailing 00:10:11.277 13:04:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:11.277 13:04:15 -- scripts/common.sh@394 -- # pt= 00:10:11.277 13:04:15 -- scripts/common.sh@395 -- # return 1 00:10:11.277 13:04:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:11.277 1+0 records in 00:10:11.277 1+0 records out 00:10:11.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417534 s, 251 MB/s 00:10:11.277 13:04:15 -- spdk/autotest.sh@105 -- # sync 00:10:11.277 13:04:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:10:11.277 13:04:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:11.277 13:04:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:11.536 13:04:17 -- spdk/autotest.sh@111 -- # uname -s 00:10:11.536 13:04:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:10:11.536 13:04:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:10:11.536 13:04:17 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:12.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.362 Hugepages 00:10:12.362 node hugesize free / total 00:10:12.362 node0 1048576kB 0 / 0 00:10:12.362 node0 2048kB 0 / 0 00:10:12.362 00:10:12.362 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:12.362 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:12.362 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:10:12.362 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:10:12.362 13:04:18 -- spdk/autotest.sh@117 -- # uname -s 00:10:12.362 13:04:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:10:12.362 13:04:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:10:12.621 13:04:18 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:13.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:13.189 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:13.449 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:13.449 13:04:19 -- common/autotest_common.sh@1517 -- # sleep 1 00:10:14.385 13:04:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:10:14.385 13:04:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:10:14.385 13:04:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:14.385 13:04:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:10:14.385 13:04:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:14.385 13:04:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:14.385 13:04:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:14.386 13:04:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:14.386 13:04:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:14.386 13:04:20 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:10:14.386 13:04:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:14.386 13:04:20 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:14.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:14.974 Waiting for block devices as requested 00:10:14.974 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.974 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.974 13:04:21 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:14.974 13:04:21 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:10:14.974 13:04:21 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:14.974 13:04:21 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:10:14.974 13:04:21 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:14.974 13:04:21 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:10:14.974 13:04:21 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:14.974 13:04:21 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:10:14.974 13:04:21 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:10:14.974 
13:04:21 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:10:14.974 13:04:21 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:10:14.974 13:04:21 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:14.974 13:04:21 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:14.974 13:04:21 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:14.974 13:04:21 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:14.974 13:04:21 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:14.974 13:04:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:10:14.974 13:04:21 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:14.974 13:04:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:14.974 13:04:21 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:14.974 13:04:21 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:14.974 13:04:21 -- common/autotest_common.sh@1543 -- # continue 00:10:14.974 13:04:21 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:10:14.974 13:04:21 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:10:14.974 13:04:21 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:14.974 13:04:21 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:10:14.974 13:04:21 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:14.974 13:04:21 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:10:14.974 13:04:21 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:14.974 13:04:21 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:10:14.974 13:04:21 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:10:14.974 13:04:21 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:10:15.233 13:04:21 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:10:15.233 13:04:21 -- common/autotest_common.sh@1531 -- # grep oacs 00:10:15.233 13:04:21 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:10:15.233 13:04:21 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:10:15.233 13:04:21 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:10:15.233 13:04:21 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:10:15.233 13:04:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:10:15.233 13:04:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:10:15.233 13:04:21 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:10:15.233 13:04:21 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:10:15.233 13:04:21 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:10:15.233 13:04:21 -- common/autotest_common.sh@1543 -- # continue 00:10:15.233 13:04:21 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:10:15.233 13:04:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.233 13:04:21 -- common/autotest_common.sh@10 -- # set +x 00:10:15.233 13:04:21 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:10:15.233 13:04:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.233 13:04:21 -- common/autotest_common.sh@10 -- # set +x 00:10:15.233 13:04:21 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:15.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:16.059 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.059 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.059 13:04:22 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:10:16.059 13:04:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.059 13:04:22 -- common/autotest_common.sh@10 -- # set +x 00:10:16.059 13:04:22 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:10:16.059 13:04:22 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:10:16.059 13:04:22 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:10:16.059 13:04:22 -- common/autotest_common.sh@1563 -- # bdfs=() 00:10:16.059 13:04:22 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:10:16.059 13:04:22 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:10:16.059 13:04:22 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:10:16.059 13:04:22 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:10:16.059 13:04:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:16.059 13:04:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:10:16.059 13:04:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:16.059 13:04:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:16.059 13:04:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:16.317 13:04:22 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:10:16.317 13:04:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:16.317 13:04:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:16.317 13:04:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:10:16.317 13:04:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:16.317 13:04:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:16.318 13:04:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:10:16.318 13:04:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:10:16.318 13:04:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:10:16.318 13:04:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:16.318 13:04:22 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:10:16.318 13:04:22 -- 
common/autotest_common.sh@1572 -- # return 0 00:10:16.318 13:04:22 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:10:16.318 13:04:22 -- common/autotest_common.sh@1580 -- # return 0 00:10:16.318 13:04:22 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:10:16.318 13:04:22 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:10:16.318 13:04:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:16.318 13:04:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:10:16.318 13:04:22 -- spdk/autotest.sh@149 -- # timing_enter lib 00:10:16.318 13:04:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.318 13:04:22 -- common/autotest_common.sh@10 -- # set +x 00:10:16.318 13:04:22 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:10:16.318 13:04:22 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:16.318 13:04:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.318 13:04:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.318 13:04:22 -- common/autotest_common.sh@10 -- # set +x 00:10:16.318 ************************************ 00:10:16.318 START TEST env 00:10:16.318 ************************************ 00:10:16.318 13:04:22 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:16.318 * Looking for test storage... 
00:10:16.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:16.318 13:04:22 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.318 13:04:22 env -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.318 13:04:22 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.318 13:04:22 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.318 13:04:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.318 13:04:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.318 13:04:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.318 13:04:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.318 13:04:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.318 13:04:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.318 13:04:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.318 13:04:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.318 13:04:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.318 13:04:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.318 13:04:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.318 13:04:22 env -- scripts/common.sh@344 -- # case "$op" in 00:10:16.318 13:04:22 env -- scripts/common.sh@345 -- # : 1 00:10:16.318 13:04:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.318 13:04:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.318 13:04:22 env -- scripts/common.sh@365 -- # decimal 1 00:10:16.318 13:04:22 env -- scripts/common.sh@353 -- # local d=1 00:10:16.318 13:04:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.318 13:04:22 env -- scripts/common.sh@355 -- # echo 1 00:10:16.577 13:04:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.577 13:04:22 env -- scripts/common.sh@366 -- # decimal 2 00:10:16.577 13:04:22 env -- scripts/common.sh@353 -- # local d=2 00:10:16.577 13:04:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.577 13:04:22 env -- scripts/common.sh@355 -- # echo 2 00:10:16.577 13:04:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.577 13:04:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.577 13:04:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.577 13:04:22 env -- scripts/common.sh@368 -- # return 0 00:10:16.577 13:04:22 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.577 13:04:22 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.577 --rc genhtml_branch_coverage=1 00:10:16.577 --rc genhtml_function_coverage=1 00:10:16.577 --rc genhtml_legend=1 00:10:16.577 --rc geninfo_all_blocks=1 00:10:16.577 --rc geninfo_unexecuted_blocks=1 00:10:16.577 00:10:16.577 ' 00:10:16.577 13:04:22 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.577 --rc genhtml_branch_coverage=1 00:10:16.577 --rc genhtml_function_coverage=1 00:10:16.577 --rc genhtml_legend=1 00:10:16.577 --rc geninfo_all_blocks=1 00:10:16.577 --rc geninfo_unexecuted_blocks=1 00:10:16.577 00:10:16.577 ' 00:10:16.577 13:04:22 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:16.577 --rc genhtml_branch_coverage=1 00:10:16.577 --rc genhtml_function_coverage=1 00:10:16.577 --rc genhtml_legend=1 00:10:16.577 --rc geninfo_all_blocks=1 00:10:16.577 --rc geninfo_unexecuted_blocks=1 00:10:16.577 00:10:16.577 ' 00:10:16.577 13:04:22 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.577 --rc genhtml_branch_coverage=1 00:10:16.577 --rc genhtml_function_coverage=1 00:10:16.577 --rc genhtml_legend=1 00:10:16.577 --rc geninfo_all_blocks=1 00:10:16.577 --rc geninfo_unexecuted_blocks=1 00:10:16.577 00:10:16.577 ' 00:10:16.577 13:04:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:16.577 13:04:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.577 13:04:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.577 13:04:22 env -- common/autotest_common.sh@10 -- # set +x 00:10:16.577 ************************************ 00:10:16.577 START TEST env_memory 00:10:16.577 ************************************ 00:10:16.577 13:04:22 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:16.577 00:10:16.577 00:10:16.577 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.577 http://cunit.sourceforge.net/ 00:10:16.577 00:10:16.577 00:10:16.577 Suite: memory 00:10:16.577 Test: alloc and free memory map ...[2024-12-06 13:04:22.940637] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:16.577 passed 00:10:16.577 Test: mem map translation ...[2024-12-06 13:04:23.002973] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:16.577 [2024-12-06 13:04:23.003117] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:16.577 [2024-12-06 13:04:23.003221] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:16.577 [2024-12-06 13:04:23.003252] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:16.577 passed 00:10:16.577 Test: mem map registration ...[2024-12-06 13:04:23.102946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:10:16.577 [2024-12-06 13:04:23.103130] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:10:16.836 passed 00:10:16.836 Test: mem map adjacent registrations ...passed 00:10:16.836 00:10:16.836 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.836 suites 1 1 n/a 0 0 00:10:16.836 tests 4 4 4 0 0 00:10:16.836 asserts 152 152 152 0 n/a 00:10:16.836 00:10:16.836 Elapsed time = 0.352 seconds 00:10:16.836 00:10:16.836 real 0m0.400s 00:10:16.836 user 0m0.359s 00:10:16.836 sys 0m0.032s 00:10:16.836 13:04:23 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.836 13:04:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:16.836 ************************************ 00:10:16.836 END TEST env_memory 00:10:16.836 ************************************ 00:10:16.836 13:04:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:16.836 13:04:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.836 13:04:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.836 13:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:10:16.836 
************************************ 00:10:16.836 START TEST env_vtophys 00:10:16.836 ************************************ 00:10:16.836 13:04:23 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:17.095 EAL: lib.eal log level changed from notice to debug 00:10:17.095 EAL: Detected lcore 0 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 1 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 2 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 3 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 4 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 5 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 6 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 7 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 8 as core 0 on socket 0 00:10:17.095 EAL: Detected lcore 9 as core 0 on socket 0 00:10:17.095 EAL: Maximum logical cores by configuration: 128 00:10:17.095 EAL: Detected CPU lcores: 10 00:10:17.095 EAL: Detected NUMA nodes: 1 00:10:17.095 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:10:17.095 EAL: Detected shared linkage of DPDK 00:10:17.095 EAL: No shared files mode enabled, IPC will be disabled 00:10:17.095 EAL: Selected IOVA mode 'PA' 00:10:17.095 EAL: Probing VFIO support... 00:10:17.095 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:17.095 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:17.095 EAL: Ask a virtual area of 0x2e000 bytes 00:10:17.095 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:17.095 EAL: Setting up physically contiguous memory... 
00:10:17.095 EAL: Setting maximum number of open files to 524288 00:10:17.095 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:17.095 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:17.095 EAL: Ask a virtual area of 0x61000 bytes 00:10:17.095 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:17.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:17.095 EAL: Ask a virtual area of 0x400000000 bytes 00:10:17.095 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:17.095 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:17.095 EAL: Ask a virtual area of 0x61000 bytes 00:10:17.095 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:17.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:17.095 EAL: Ask a virtual area of 0x400000000 bytes 00:10:17.095 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:17.095 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:17.095 EAL: Ask a virtual area of 0x61000 bytes 00:10:17.095 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:17.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:17.095 EAL: Ask a virtual area of 0x400000000 bytes 00:10:17.095 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:17.095 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:17.095 EAL: Ask a virtual area of 0x61000 bytes 00:10:17.095 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:17.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:17.095 EAL: Ask a virtual area of 0x400000000 bytes 00:10:17.095 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:17.095 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:17.095 EAL: Hugepages will be freed exactly as allocated. 
00:10:17.095 EAL: No shared files mode enabled, IPC is disabled 00:10:17.095 EAL: No shared files mode enabled, IPC is disabled 00:10:17.095 EAL: TSC frequency is ~2200000 KHz 00:10:17.095 EAL: Main lcore 0 is ready (tid=7fb5d7e19a40;cpuset=[0]) 00:10:17.095 EAL: Trying to obtain current memory policy. 00:10:17.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.095 EAL: Restoring previous memory policy: 0 00:10:17.095 EAL: request: mp_malloc_sync 00:10:17.095 EAL: No shared files mode enabled, IPC is disabled 00:10:17.095 EAL: Heap on socket 0 was expanded by 2MB 00:10:17.095 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:17.095 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:17.095 EAL: Mem event callback 'spdk:(nil)' registered 00:10:17.095 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:10:17.095 00:10:17.095 00:10:17.095 CUnit - A unit testing framework for C - Version 2.1-3 00:10:17.095 http://cunit.sourceforge.net/ 00:10:17.095 00:10:17.095 00:10:17.095 Suite: components_suite 00:10:17.663 Test: vtophys_malloc_test ...passed 00:10:17.663 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:17.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.663 EAL: Restoring previous memory policy: 4 00:10:17.663 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.663 EAL: request: mp_malloc_sync 00:10:17.663 EAL: No shared files mode enabled, IPC is disabled 00:10:17.663 EAL: Heap on socket 0 was expanded by 4MB 00:10:17.663 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.663 EAL: request: mp_malloc_sync 00:10:17.663 EAL: No shared files mode enabled, IPC is disabled 00:10:17.663 EAL: Heap on socket 0 was shrunk by 4MB 00:10:17.663 EAL: Trying to obtain current memory policy. 
00:10:17.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.663 EAL: Restoring previous memory policy: 4 00:10:17.663 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.663 EAL: request: mp_malloc_sync 00:10:17.663 EAL: No shared files mode enabled, IPC is disabled 00:10:17.663 EAL: Heap on socket 0 was expanded by 6MB 00:10:17.663 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.663 EAL: request: mp_malloc_sync 00:10:17.663 EAL: No shared files mode enabled, IPC is disabled 00:10:17.663 EAL: Heap on socket 0 was shrunk by 6MB 00:10:17.663 EAL: Trying to obtain current memory policy. 00:10:17.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.663 EAL: Restoring previous memory policy: 4 00:10:17.663 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.663 EAL: request: mp_malloc_sync 00:10:17.663 EAL: No shared files mode enabled, IPC is disabled 00:10:17.663 EAL: Heap on socket 0 was expanded by 10MB 00:10:17.663 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.663 EAL: request: mp_malloc_sync 00:10:17.663 EAL: No shared files mode enabled, IPC is disabled 00:10:17.663 EAL: Heap on socket 0 was shrunk by 10MB 00:10:17.663 EAL: Trying to obtain current memory policy. 00:10:17.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.664 EAL: Restoring previous memory policy: 4 00:10:17.664 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.664 EAL: request: mp_malloc_sync 00:10:17.664 EAL: No shared files mode enabled, IPC is disabled 00:10:17.664 EAL: Heap on socket 0 was expanded by 18MB 00:10:17.923 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.923 EAL: request: mp_malloc_sync 00:10:17.923 EAL: No shared files mode enabled, IPC is disabled 00:10:17.923 EAL: Heap on socket 0 was shrunk by 18MB 00:10:17.923 EAL: Trying to obtain current memory policy. 
00:10:17.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.923 EAL: Restoring previous memory policy: 4 00:10:17.923 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.923 EAL: request: mp_malloc_sync 00:10:17.923 EAL: No shared files mode enabled, IPC is disabled 00:10:17.923 EAL: Heap on socket 0 was expanded by 34MB 00:10:17.923 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.923 EAL: request: mp_malloc_sync 00:10:17.923 EAL: No shared files mode enabled, IPC is disabled 00:10:17.923 EAL: Heap on socket 0 was shrunk by 34MB 00:10:17.923 EAL: Trying to obtain current memory policy. 00:10:17.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.923 EAL: Restoring previous memory policy: 4 00:10:17.923 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.923 EAL: request: mp_malloc_sync 00:10:17.923 EAL: No shared files mode enabled, IPC is disabled 00:10:17.923 EAL: Heap on socket 0 was expanded by 66MB 00:10:18.181 EAL: Calling mem event callback 'spdk:(nil)' 00:10:18.181 EAL: request: mp_malloc_sync 00:10:18.181 EAL: No shared files mode enabled, IPC is disabled 00:10:18.181 EAL: Heap on socket 0 was shrunk by 66MB 00:10:18.181 EAL: Trying to obtain current memory policy. 00:10:18.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:18.181 EAL: Restoring previous memory policy: 4 00:10:18.181 EAL: Calling mem event callback 'spdk:(nil)' 00:10:18.181 EAL: request: mp_malloc_sync 00:10:18.181 EAL: No shared files mode enabled, IPC is disabled 00:10:18.181 EAL: Heap on socket 0 was expanded by 130MB 00:10:18.440 EAL: Calling mem event callback 'spdk:(nil)' 00:10:18.440 EAL: request: mp_malloc_sync 00:10:18.440 EAL: No shared files mode enabled, IPC is disabled 00:10:18.440 EAL: Heap on socket 0 was shrunk by 130MB 00:10:18.698 EAL: Trying to obtain current memory policy. 
00:10:18.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:18.698 EAL: Restoring previous memory policy: 4 00:10:18.698 EAL: Calling mem event callback 'spdk:(nil)' 00:10:18.698 EAL: request: mp_malloc_sync 00:10:18.698 EAL: No shared files mode enabled, IPC is disabled 00:10:18.698 EAL: Heap on socket 0 was expanded by 258MB 00:10:19.264 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.264 EAL: request: mp_malloc_sync 00:10:19.264 EAL: No shared files mode enabled, IPC is disabled 00:10:19.264 EAL: Heap on socket 0 was shrunk by 258MB 00:10:19.829 EAL: Trying to obtain current memory policy. 00:10:19.829 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.829 EAL: Restoring previous memory policy: 4 00:10:19.829 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.829 EAL: request: mp_malloc_sync 00:10:19.829 EAL: No shared files mode enabled, IPC is disabled 00:10:19.829 EAL: Heap on socket 0 was expanded by 514MB 00:10:20.763 EAL: Calling mem event callback 'spdk:(nil)' 00:10:21.021 EAL: request: mp_malloc_sync 00:10:21.021 EAL: No shared files mode enabled, IPC is disabled 00:10:21.021 EAL: Heap on socket 0 was shrunk by 514MB 00:10:21.588 EAL: Trying to obtain current memory policy. 
00:10:21.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:22.154 EAL: Restoring previous memory policy: 4 00:10:22.154 EAL: Calling mem event callback 'spdk:(nil)' 00:10:22.154 EAL: request: mp_malloc_sync 00:10:22.154 EAL: No shared files mode enabled, IPC is disabled 00:10:22.154 EAL: Heap on socket 0 was expanded by 1026MB 00:10:24.089 EAL: Calling mem event callback 'spdk:(nil)' 00:10:24.089 EAL: request: mp_malloc_sync 00:10:24.089 EAL: No shared files mode enabled, IPC is disabled 00:10:24.089 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:25.466 passed 00:10:25.466 00:10:25.466 Run Summary: Type Total Ran Passed Failed Inactive 00:10:25.466 suites 1 1 n/a 0 0 00:10:25.466 tests 2 2 2 0 0 00:10:25.466 asserts 5691 5691 5691 0 n/a 00:10:25.466 00:10:25.466 Elapsed time = 8.155 seconds 00:10:25.466 EAL: Calling mem event callback 'spdk:(nil)' 00:10:25.466 EAL: request: mp_malloc_sync 00:10:25.466 EAL: No shared files mode enabled, IPC is disabled 00:10:25.466 EAL: Heap on socket 0 was shrunk by 2MB 00:10:25.466 EAL: No shared files mode enabled, IPC is disabled 00:10:25.466 EAL: No shared files mode enabled, IPC is disabled 00:10:25.466 EAL: No shared files mode enabled, IPC is disabled 00:10:25.466 00:10:25.466 real 0m8.527s 00:10:25.466 user 0m7.078s 00:10:25.466 sys 0m1.266s 00:10:25.466 13:04:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.466 13:04:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:25.466 ************************************ 00:10:25.466 END TEST env_vtophys 00:10:25.466 ************************************ 00:10:25.466 13:04:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:25.466 13:04:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.466 13:04:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.466 13:04:31 env -- common/autotest_common.sh@10 -- # set +x 00:10:25.466 
************************************ 00:10:25.466 START TEST env_pci 00:10:25.466 ************************************ 00:10:25.466 13:04:31 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:25.466 00:10:25.466 00:10:25.466 CUnit - A unit testing framework for C - Version 2.1-3 00:10:25.466 http://cunit.sourceforge.net/ 00:10:25.466 00:10:25.466 00:10:25.466 Suite: pci 00:10:25.467 Test: pci_hook ...[2024-12-06 13:04:31.935850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56829 has claimed it 00:10:25.467 passed 00:10:25.467 00:10:25.467 Run Summary: Type Total Ran Passed Failed Inactive 00:10:25.467 suites 1 1 n/a 0 0 00:10:25.467 tests 1 1 1 0 0 00:10:25.467 asserts 25 25 25 0 n/a 00:10:25.467 00:10:25.467 Elapsed time = 0.006 seconds 00:10:25.467 EAL: Cannot find device (10000:00:01.0) 00:10:25.467 EAL: Failed to attach device on primary process 00:10:25.467 00:10:25.467 real 0m0.074s 00:10:25.467 user 0m0.037s 00:10:25.467 sys 0m0.037s 00:10:25.467 13:04:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.467 13:04:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:25.467 ************************************ 00:10:25.467 END TEST env_pci 00:10:25.467 ************************************ 00:10:25.736 13:04:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:25.736 13:04:32 env -- env/env.sh@15 -- # uname 00:10:25.736 13:04:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:25.736 13:04:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:25.736 13:04:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:25.736 13:04:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.736 13:04:32 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.736 13:04:32 env -- common/autotest_common.sh@10 -- # set +x 00:10:25.736 ************************************ 00:10:25.736 START TEST env_dpdk_post_init 00:10:25.736 ************************************ 00:10:25.736 13:04:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:25.736 EAL: Detected CPU lcores: 10 00:10:25.736 EAL: Detected NUMA nodes: 1 00:10:25.736 EAL: Detected shared linkage of DPDK 00:10:25.736 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:25.736 EAL: Selected IOVA mode 'PA' 00:10:25.736 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:25.995 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:25.995 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:25.995 Starting DPDK initialization... 00:10:25.995 Starting SPDK post initialization... 00:10:25.995 SPDK NVMe probe 00:10:25.995 Attaching to 0000:00:10.0 00:10:25.995 Attaching to 0000:00:11.0 00:10:25.995 Attached to 0000:00:10.0 00:10:25.995 Attached to 0000:00:11.0 00:10:25.995 Cleaning up... 
00:10:25.995 00:10:25.995 real 0m0.298s 00:10:25.995 user 0m0.101s 00:10:25.995 sys 0m0.097s 00:10:25.995 13:04:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.995 13:04:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:25.995 ************************************ 00:10:25.995 END TEST env_dpdk_post_init 00:10:25.995 ************************************ 00:10:25.995 13:04:32 env -- env/env.sh@26 -- # uname 00:10:25.995 13:04:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:25.995 13:04:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:25.995 13:04:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.995 13:04:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.995 13:04:32 env -- common/autotest_common.sh@10 -- # set +x 00:10:25.995 ************************************ 00:10:25.995 START TEST env_mem_callbacks 00:10:25.995 ************************************ 00:10:25.995 13:04:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:25.995 EAL: Detected CPU lcores: 10 00:10:25.995 EAL: Detected NUMA nodes: 1 00:10:25.995 EAL: Detected shared linkage of DPDK 00:10:25.995 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:25.995 EAL: Selected IOVA mode 'PA' 00:10:26.254 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:26.254 00:10:26.254 00:10:26.254 CUnit - A unit testing framework for C - Version 2.1-3 00:10:26.254 http://cunit.sourceforge.net/ 00:10:26.254 00:10:26.254 00:10:26.254 Suite: memory 00:10:26.254 Test: test ... 
00:10:26.254 register 0x200000200000 2097152 00:10:26.254 malloc 3145728 00:10:26.254 register 0x200000400000 4194304 00:10:26.254 buf 0x2000004fffc0 len 3145728 PASSED 00:10:26.254 malloc 64 00:10:26.254 buf 0x2000004ffec0 len 64 PASSED 00:10:26.254 malloc 4194304 00:10:26.254 register 0x200000800000 6291456 00:10:26.254 buf 0x2000009fffc0 len 4194304 PASSED 00:10:26.254 free 0x2000004fffc0 3145728 00:10:26.254 free 0x2000004ffec0 64 00:10:26.254 unregister 0x200000400000 4194304 PASSED 00:10:26.254 free 0x2000009fffc0 4194304 00:10:26.254 unregister 0x200000800000 6291456 PASSED 00:10:26.254 malloc 8388608 00:10:26.254 register 0x200000400000 10485760 00:10:26.254 buf 0x2000005fffc0 len 8388608 PASSED 00:10:26.254 free 0x2000005fffc0 8388608 00:10:26.254 unregister 0x200000400000 10485760 PASSED 00:10:26.254 passed 00:10:26.254 00:10:26.254 Run Summary: Type Total Ran Passed Failed Inactive 00:10:26.254 suites 1 1 n/a 0 0 00:10:26.254 tests 1 1 1 0 0 00:10:26.254 asserts 15 15 15 0 n/a 00:10:26.254 00:10:26.254 Elapsed time = 0.075 seconds 00:10:26.254 00:10:26.254 real 0m0.283s 00:10:26.254 user 0m0.104s 00:10:26.254 sys 0m0.078s 00:10:26.254 13:04:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.254 13:04:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:26.254 ************************************ 00:10:26.254 END TEST env_mem_callbacks 00:10:26.254 ************************************ 00:10:26.254 ************************************ 00:10:26.254 END TEST env 00:10:26.254 ************************************ 00:10:26.254 00:10:26.254 real 0m10.073s 00:10:26.254 user 0m7.893s 00:10:26.254 sys 0m1.779s 00:10:26.254 13:04:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.254 13:04:32 env -- common/autotest_common.sh@10 -- # set +x 00:10:26.254 13:04:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:26.254 13:04:32 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:26.254 13:04:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.254 13:04:32 -- common/autotest_common.sh@10 -- # set +x 00:10:26.254 ************************************ 00:10:26.254 START TEST rpc 00:10:26.254 ************************************ 00:10:26.254 13:04:32 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:26.512 * Looking for test storage... 00:10:26.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:26.512 13:04:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.512 13:04:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.512 13:04:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.512 13:04:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.512 13:04:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.512 13:04:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.512 13:04:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.512 13:04:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.512 13:04:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.512 13:04:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.512 13:04:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.512 13:04:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:26.512 13:04:32 rpc -- scripts/common.sh@345 -- # : 1 00:10:26.512 13:04:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.512 13:04:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.512 13:04:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:26.512 13:04:32 rpc -- scripts/common.sh@353 -- # local d=1 00:10:26.512 13:04:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.512 13:04:32 rpc -- scripts/common.sh@355 -- # echo 1 00:10:26.512 13:04:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.512 13:04:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:26.512 13:04:32 rpc -- scripts/common.sh@353 -- # local d=2 00:10:26.512 13:04:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.512 13:04:32 rpc -- scripts/common.sh@355 -- # echo 2 00:10:26.512 13:04:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.512 13:04:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.512 13:04:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.512 13:04:32 rpc -- scripts/common.sh@368 -- # return 0 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:26.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.512 --rc genhtml_branch_coverage=1 00:10:26.512 --rc genhtml_function_coverage=1 00:10:26.512 --rc genhtml_legend=1 00:10:26.512 --rc geninfo_all_blocks=1 00:10:26.512 --rc geninfo_unexecuted_blocks=1 00:10:26.512 00:10:26.512 ' 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:26.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.512 --rc genhtml_branch_coverage=1 00:10:26.512 --rc genhtml_function_coverage=1 00:10:26.512 --rc genhtml_legend=1 00:10:26.512 --rc geninfo_all_blocks=1 00:10:26.512 --rc geninfo_unexecuted_blocks=1 00:10:26.512 00:10:26.512 ' 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:26.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:26.512 --rc genhtml_branch_coverage=1 00:10:26.512 --rc genhtml_function_coverage=1 00:10:26.512 --rc genhtml_legend=1 00:10:26.512 --rc geninfo_all_blocks=1 00:10:26.512 --rc geninfo_unexecuted_blocks=1 00:10:26.512 00:10:26.512 ' 00:10:26.512 13:04:32 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:26.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.512 --rc genhtml_branch_coverage=1 00:10:26.512 --rc genhtml_function_coverage=1 00:10:26.512 --rc genhtml_legend=1 00:10:26.512 --rc geninfo_all_blocks=1 00:10:26.512 --rc geninfo_unexecuted_blocks=1 00:10:26.512 00:10:26.512 ' 00:10:26.512 13:04:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56956 00:10:26.512 13:04:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:26.512 13:04:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:26.512 13:04:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56956 00:10:26.513 13:04:32 rpc -- common/autotest_common.sh@835 -- # '[' -z 56956 ']' 00:10:26.513 13:04:32 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.513 13:04:32 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.513 13:04:32 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.513 13:04:32 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.513 13:04:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.771 [2024-12-06 13:04:33.109248] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:10:26.771 [2024-12-06 13:04:33.109911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56956 ] 00:10:27.030 [2024-12-06 13:04:33.313116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.030 [2024-12-06 13:04:33.484738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:27.030 [2024-12-06 13:04:33.484842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56956' to capture a snapshot of events at runtime. 00:10:27.030 [2024-12-06 13:04:33.484872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.030 [2024-12-06 13:04:33.484891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.030 [2024-12-06 13:04:33.484913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56956 for offline analysis/debug. 
00:10:27.030 [2024-12-06 13:04:33.486695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.998 13:04:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.998 13:04:34 rpc -- common/autotest_common.sh@868 -- # return 0 00:10:27.998 13:04:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:27.998 13:04:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:27.998 13:04:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:27.998 13:04:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:27.998 13:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:27.998 13:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.998 13:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.998 ************************************ 00:10:27.998 START TEST rpc_integrity 00:10:27.998 ************************************ 00:10:27.998 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:27.998 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:27.998 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.998 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:27.998 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.998 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:27.998 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:28.256 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:28.256 13:04:34 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:28.256 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.256 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.256 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.256 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:28.256 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:28.256 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.256 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.256 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.256 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:28.256 { 00:10:28.256 "name": "Malloc0", 00:10:28.256 "aliases": [ 00:10:28.257 "85f2b2dc-3517-4bc3-b17e-52e9f5272331" 00:10:28.257 ], 00:10:28.257 "product_name": "Malloc disk", 00:10:28.257 "block_size": 512, 00:10:28.257 "num_blocks": 16384, 00:10:28.257 "uuid": "85f2b2dc-3517-4bc3-b17e-52e9f5272331", 00:10:28.257 "assigned_rate_limits": { 00:10:28.257 "rw_ios_per_sec": 0, 00:10:28.257 "rw_mbytes_per_sec": 0, 00:10:28.257 "r_mbytes_per_sec": 0, 00:10:28.257 "w_mbytes_per_sec": 0 00:10:28.257 }, 00:10:28.257 "claimed": false, 00:10:28.257 "zoned": false, 00:10:28.257 "supported_io_types": { 00:10:28.257 "read": true, 00:10:28.257 "write": true, 00:10:28.257 "unmap": true, 00:10:28.257 "flush": true, 00:10:28.257 "reset": true, 00:10:28.257 "nvme_admin": false, 00:10:28.257 "nvme_io": false, 00:10:28.257 "nvme_io_md": false, 00:10:28.257 "write_zeroes": true, 00:10:28.257 "zcopy": true, 00:10:28.257 "get_zone_info": false, 00:10:28.257 "zone_management": false, 00:10:28.257 "zone_append": false, 00:10:28.257 "compare": false, 00:10:28.257 "compare_and_write": false, 00:10:28.257 "abort": true, 00:10:28.257 "seek_hole": false, 
00:10:28.257 "seek_data": false, 00:10:28.257 "copy": true, 00:10:28.257 "nvme_iov_md": false 00:10:28.257 }, 00:10:28.257 "memory_domains": [ 00:10:28.257 { 00:10:28.257 "dma_device_id": "system", 00:10:28.257 "dma_device_type": 1 00:10:28.257 }, 00:10:28.257 { 00:10:28.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.257 "dma_device_type": 2 00:10:28.257 } 00:10:28.257 ], 00:10:28.257 "driver_specific": {} 00:10:28.257 } 00:10:28.257 ]' 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:28.257 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.257 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.257 [2024-12-06 13:04:34.681818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:28.257 [2024-12-06 13:04:34.681942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.257 [2024-12-06 13:04:34.681978] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:28.257 [2024-12-06 13:04:34.681999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.257 [2024-12-06 13:04:34.685628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.257 [2024-12-06 13:04:34.685683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:28.257 Passthru0 00:10:28.257 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:28.257 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.257 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:10:28.257 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:28.257 { 00:10:28.257 "name": "Malloc0", 00:10:28.257 "aliases": [ 00:10:28.257 "85f2b2dc-3517-4bc3-b17e-52e9f5272331" 00:10:28.257 ], 00:10:28.257 "product_name": "Malloc disk", 00:10:28.257 "block_size": 512, 00:10:28.257 "num_blocks": 16384, 00:10:28.257 "uuid": "85f2b2dc-3517-4bc3-b17e-52e9f5272331", 00:10:28.257 "assigned_rate_limits": { 00:10:28.257 "rw_ios_per_sec": 0, 00:10:28.257 "rw_mbytes_per_sec": 0, 00:10:28.257 "r_mbytes_per_sec": 0, 00:10:28.257 "w_mbytes_per_sec": 0 00:10:28.257 }, 00:10:28.257 "claimed": true, 00:10:28.257 "claim_type": "exclusive_write", 00:10:28.257 "zoned": false, 00:10:28.257 "supported_io_types": { 00:10:28.257 "read": true, 00:10:28.257 "write": true, 00:10:28.257 "unmap": true, 00:10:28.257 "flush": true, 00:10:28.257 "reset": true, 00:10:28.257 "nvme_admin": false, 00:10:28.257 "nvme_io": false, 00:10:28.257 "nvme_io_md": false, 00:10:28.257 "write_zeroes": true, 00:10:28.257 "zcopy": true, 00:10:28.257 "get_zone_info": false, 00:10:28.257 "zone_management": false, 00:10:28.257 "zone_append": false, 00:10:28.257 "compare": false, 00:10:28.257 "compare_and_write": false, 00:10:28.257 "abort": true, 00:10:28.257 "seek_hole": false, 00:10:28.257 "seek_data": false, 00:10:28.257 "copy": true, 00:10:28.257 "nvme_iov_md": false 00:10:28.257 }, 00:10:28.257 "memory_domains": [ 00:10:28.257 { 00:10:28.257 "dma_device_id": "system", 00:10:28.257 "dma_device_type": 1 00:10:28.257 }, 00:10:28.257 { 00:10:28.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.257 "dma_device_type": 2 00:10:28.257 } 00:10:28.257 ], 00:10:28.257 "driver_specific": {} 00:10:28.257 }, 00:10:28.257 { 00:10:28.257 "name": "Passthru0", 00:10:28.257 "aliases": [ 00:10:28.257 "5ad528cf-5122-5dfb-8239-4892d3cd579e" 00:10:28.257 ], 00:10:28.257 "product_name": "passthru", 00:10:28.257 
"block_size": 512, 00:10:28.257 "num_blocks": 16384, 00:10:28.257 "uuid": "5ad528cf-5122-5dfb-8239-4892d3cd579e", 00:10:28.257 "assigned_rate_limits": { 00:10:28.257 "rw_ios_per_sec": 0, 00:10:28.257 "rw_mbytes_per_sec": 0, 00:10:28.257 "r_mbytes_per_sec": 0, 00:10:28.257 "w_mbytes_per_sec": 0 00:10:28.257 }, 00:10:28.257 "claimed": false, 00:10:28.257 "zoned": false, 00:10:28.257 "supported_io_types": { 00:10:28.257 "read": true, 00:10:28.257 "write": true, 00:10:28.257 "unmap": true, 00:10:28.257 "flush": true, 00:10:28.257 "reset": true, 00:10:28.257 "nvme_admin": false, 00:10:28.257 "nvme_io": false, 00:10:28.257 "nvme_io_md": false, 00:10:28.257 "write_zeroes": true, 00:10:28.257 "zcopy": true, 00:10:28.257 "get_zone_info": false, 00:10:28.257 "zone_management": false, 00:10:28.257 "zone_append": false, 00:10:28.257 "compare": false, 00:10:28.257 "compare_and_write": false, 00:10:28.257 "abort": true, 00:10:28.257 "seek_hole": false, 00:10:28.257 "seek_data": false, 00:10:28.257 "copy": true, 00:10:28.257 "nvme_iov_md": false 00:10:28.257 }, 00:10:28.257 "memory_domains": [ 00:10:28.257 { 00:10:28.257 "dma_device_id": "system", 00:10:28.257 "dma_device_type": 1 00:10:28.257 }, 00:10:28.257 { 00:10:28.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.257 "dma_device_type": 2 00:10:28.257 } 00:10:28.257 ], 00:10:28.257 "driver_specific": { 00:10:28.257 "passthru": { 00:10:28.257 "name": "Passthru0", 00:10:28.257 "base_bdev_name": "Malloc0" 00:10:28.257 } 00:10:28.257 } 00:10:28.257 } 00:10:28.257 ]' 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:28.257 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.257 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.257 13:04:34 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.257 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:28.258 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.258 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.516 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:28.516 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.516 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.516 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:28.516 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:28.516 ************************************ 00:10:28.516 END TEST rpc_integrity 00:10:28.516 ************************************ 00:10:28.516 13:04:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:28.516 00:10:28.516 real 0m0.363s 00:10:28.516 user 0m0.213s 00:10:28.516 sys 0m0.046s 00:10:28.516 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.516 13:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 13:04:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:28.516 13:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.516 13:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.516 13:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 ************************************ 00:10:28.516 START TEST rpc_plugins 00:10:28.516 ************************************ 00:10:28.516 13:04:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:10:28.516 13:04:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:10:28.516 13:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.516 13:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 13:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.516 13:04:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:28.516 13:04:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:28.516 13:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.516 13:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 13:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.516 13:04:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:28.516 { 00:10:28.516 "name": "Malloc1", 00:10:28.516 "aliases": [ 00:10:28.516 "b9ea7b3e-0a4e-49f0-adc0-f525f517a2d9" 00:10:28.516 ], 00:10:28.516 "product_name": "Malloc disk", 00:10:28.516 "block_size": 4096, 00:10:28.516 "num_blocks": 256, 00:10:28.516 "uuid": "b9ea7b3e-0a4e-49f0-adc0-f525f517a2d9", 00:10:28.516 "assigned_rate_limits": { 00:10:28.516 "rw_ios_per_sec": 0, 00:10:28.516 "rw_mbytes_per_sec": 0, 00:10:28.516 "r_mbytes_per_sec": 0, 00:10:28.516 "w_mbytes_per_sec": 0 00:10:28.516 }, 00:10:28.516 "claimed": false, 00:10:28.516 "zoned": false, 00:10:28.516 "supported_io_types": { 00:10:28.516 "read": true, 00:10:28.516 "write": true, 00:10:28.516 "unmap": true, 00:10:28.516 "flush": true, 00:10:28.516 "reset": true, 00:10:28.516 "nvme_admin": false, 00:10:28.516 "nvme_io": false, 00:10:28.516 "nvme_io_md": false, 00:10:28.516 "write_zeroes": true, 00:10:28.516 "zcopy": true, 00:10:28.516 "get_zone_info": false, 00:10:28.516 "zone_management": false, 00:10:28.516 "zone_append": false, 00:10:28.516 "compare": false, 00:10:28.516 "compare_and_write": false, 00:10:28.516 "abort": true, 00:10:28.516 "seek_hole": false, 00:10:28.516 "seek_data": false, 00:10:28.516 "copy": 
true, 00:10:28.516 "nvme_iov_md": false 00:10:28.516 }, 00:10:28.516 "memory_domains": [ 00:10:28.516 { 00:10:28.516 "dma_device_id": "system", 00:10:28.516 "dma_device_type": 1 00:10:28.516 }, 00:10:28.516 { 00:10:28.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.516 "dma_device_type": 2 00:10:28.516 } 00:10:28.516 ], 00:10:28.516 "driver_specific": {} 00:10:28.516 } 00:10:28.516 ]' 00:10:28.516 13:04:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:28.516 13:04:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:28.516 13:04:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:28.516 13:04:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.516 13:04:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 13:04:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.516 13:04:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:28.516 13:04:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.516 13:04:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:28.777 13:04:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.777 13:04:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:28.777 13:04:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:28.777 ************************************ 00:10:28.777 END TEST rpc_plugins 00:10:28.777 ************************************ 00:10:28.777 13:04:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:28.777 00:10:28.777 real 0m0.182s 00:10:28.777 user 0m0.117s 00:10:28.777 sys 0m0.022s 00:10:28.777 13:04:35 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.777 13:04:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:28.777 13:04:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:28.777 13:04:35 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.777 13:04:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.777 13:04:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.777 ************************************ 00:10:28.777 START TEST rpc_trace_cmd_test 00:10:28.777 ************************************ 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:28.777 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56956", 00:10:28.777 "tpoint_group_mask": "0x8", 00:10:28.777 "iscsi_conn": { 00:10:28.777 "mask": "0x2", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "scsi": { 00:10:28.777 "mask": "0x4", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "bdev": { 00:10:28.777 "mask": "0x8", 00:10:28.777 "tpoint_mask": "0xffffffffffffffff" 00:10:28.777 }, 00:10:28.777 "nvmf_rdma": { 00:10:28.777 "mask": "0x10", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "nvmf_tcp": { 00:10:28.777 "mask": "0x20", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "ftl": { 00:10:28.777 "mask": "0x40", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "blobfs": { 00:10:28.777 "mask": "0x80", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "dsa": { 00:10:28.777 "mask": "0x200", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "thread": { 00:10:28.777 "mask": "0x400", 00:10:28.777 
"tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "nvme_pcie": { 00:10:28.777 "mask": "0x800", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "iaa": { 00:10:28.777 "mask": "0x1000", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "nvme_tcp": { 00:10:28.777 "mask": "0x2000", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "bdev_nvme": { 00:10:28.777 "mask": "0x4000", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "sock": { 00:10:28.777 "mask": "0x8000", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "blob": { 00:10:28.777 "mask": "0x10000", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "bdev_raid": { 00:10:28.777 "mask": "0x20000", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 }, 00:10:28.777 "scheduler": { 00:10:28.777 "mask": "0x40000", 00:10:28.777 "tpoint_mask": "0x0" 00:10:28.777 } 00:10:28.777 }' 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:28.777 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:29.035 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:29.035 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:29.035 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:29.035 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:29.035 ************************************ 00:10:29.035 END TEST rpc_trace_cmd_test 00:10:29.035 ************************************ 00:10:29.035 13:04:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:29.035 00:10:29.035 real 0m0.276s 00:10:29.035 user 
0m0.231s 00:10:29.035 sys 0m0.033s 00:10:29.035 13:04:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.035 13:04:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.035 13:04:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:29.035 13:04:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:29.035 13:04:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:29.035 13:04:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:29.035 13:04:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.035 13:04:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.035 ************************************ 00:10:29.035 START TEST rpc_daemon_integrity 00:10:29.035 ************************************ 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.035 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:29.294 { 00:10:29.294 "name": "Malloc2", 00:10:29.294 "aliases": [ 00:10:29.294 "b1700138-d30f-4c56-a9eb-c5c8b3efcbef" 00:10:29.294 ], 00:10:29.294 "product_name": "Malloc disk", 00:10:29.294 "block_size": 512, 00:10:29.294 "num_blocks": 16384, 00:10:29.294 "uuid": "b1700138-d30f-4c56-a9eb-c5c8b3efcbef", 00:10:29.294 "assigned_rate_limits": { 00:10:29.294 "rw_ios_per_sec": 0, 00:10:29.294 "rw_mbytes_per_sec": 0, 00:10:29.294 "r_mbytes_per_sec": 0, 00:10:29.294 "w_mbytes_per_sec": 0 00:10:29.294 }, 00:10:29.294 "claimed": false, 00:10:29.294 "zoned": false, 00:10:29.294 "supported_io_types": { 00:10:29.294 "read": true, 00:10:29.294 "write": true, 00:10:29.294 "unmap": true, 00:10:29.294 "flush": true, 00:10:29.294 "reset": true, 00:10:29.294 "nvme_admin": false, 00:10:29.294 "nvme_io": false, 00:10:29.294 "nvme_io_md": false, 00:10:29.294 "write_zeroes": true, 00:10:29.294 "zcopy": true, 00:10:29.294 "get_zone_info": false, 00:10:29.294 "zone_management": false, 00:10:29.294 "zone_append": false, 00:10:29.294 "compare": false, 00:10:29.294 "compare_and_write": false, 00:10:29.294 "abort": true, 00:10:29.294 "seek_hole": false, 00:10:29.294 "seek_data": false, 00:10:29.294 "copy": true, 00:10:29.294 "nvme_iov_md": false 00:10:29.294 }, 00:10:29.294 "memory_domains": [ 00:10:29.294 { 00:10:29.294 "dma_device_id": "system", 00:10:29.294 "dma_device_type": 1 00:10:29.294 }, 00:10:29.294 { 00:10:29.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.294 "dma_device_type": 2 00:10:29.294 } 
00:10:29.294 ], 00:10:29.294 "driver_specific": {} 00:10:29.294 } 00:10:29.294 ]' 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.294 [2024-12-06 13:04:35.652153] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:29.294 [2024-12-06 13:04:35.652226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.294 [2024-12-06 13:04:35.652259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:29.294 [2024-12-06 13:04:35.652278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.294 [2024-12-06 13:04:35.655431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.294 [2024-12-06 13:04:35.655523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:29.294 Passthru0 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.294 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:29.294 { 00:10:29.294 "name": "Malloc2", 00:10:29.294 "aliases": [ 00:10:29.294 "b1700138-d30f-4c56-a9eb-c5c8b3efcbef" 
00:10:29.294 ], 00:10:29.294 "product_name": "Malloc disk", 00:10:29.294 "block_size": 512, 00:10:29.294 "num_blocks": 16384, 00:10:29.294 "uuid": "b1700138-d30f-4c56-a9eb-c5c8b3efcbef", 00:10:29.294 "assigned_rate_limits": { 00:10:29.294 "rw_ios_per_sec": 0, 00:10:29.294 "rw_mbytes_per_sec": 0, 00:10:29.294 "r_mbytes_per_sec": 0, 00:10:29.294 "w_mbytes_per_sec": 0 00:10:29.294 }, 00:10:29.294 "claimed": true, 00:10:29.294 "claim_type": "exclusive_write", 00:10:29.294 "zoned": false, 00:10:29.294 "supported_io_types": { 00:10:29.294 "read": true, 00:10:29.294 "write": true, 00:10:29.294 "unmap": true, 00:10:29.294 "flush": true, 00:10:29.294 "reset": true, 00:10:29.294 "nvme_admin": false, 00:10:29.294 "nvme_io": false, 00:10:29.294 "nvme_io_md": false, 00:10:29.294 "write_zeroes": true, 00:10:29.294 "zcopy": true, 00:10:29.294 "get_zone_info": false, 00:10:29.294 "zone_management": false, 00:10:29.294 "zone_append": false, 00:10:29.294 "compare": false, 00:10:29.295 "compare_and_write": false, 00:10:29.295 "abort": true, 00:10:29.295 "seek_hole": false, 00:10:29.295 "seek_data": false, 00:10:29.295 "copy": true, 00:10:29.295 "nvme_iov_md": false 00:10:29.295 }, 00:10:29.295 "memory_domains": [ 00:10:29.295 { 00:10:29.295 "dma_device_id": "system", 00:10:29.295 "dma_device_type": 1 00:10:29.295 }, 00:10:29.295 { 00:10:29.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.295 "dma_device_type": 2 00:10:29.295 } 00:10:29.295 ], 00:10:29.295 "driver_specific": {} 00:10:29.295 }, 00:10:29.295 { 00:10:29.295 "name": "Passthru0", 00:10:29.295 "aliases": [ 00:10:29.295 "a637dd87-638f-5bff-8cd6-ed67bdb2839a" 00:10:29.295 ], 00:10:29.295 "product_name": "passthru", 00:10:29.295 "block_size": 512, 00:10:29.295 "num_blocks": 16384, 00:10:29.295 "uuid": "a637dd87-638f-5bff-8cd6-ed67bdb2839a", 00:10:29.295 "assigned_rate_limits": { 00:10:29.295 "rw_ios_per_sec": 0, 00:10:29.295 "rw_mbytes_per_sec": 0, 00:10:29.295 "r_mbytes_per_sec": 0, 00:10:29.295 "w_mbytes_per_sec": 0 
00:10:29.295 }, 00:10:29.295 "claimed": false, 00:10:29.295 "zoned": false, 00:10:29.295 "supported_io_types": { 00:10:29.295 "read": true, 00:10:29.295 "write": true, 00:10:29.295 "unmap": true, 00:10:29.295 "flush": true, 00:10:29.295 "reset": true, 00:10:29.295 "nvme_admin": false, 00:10:29.295 "nvme_io": false, 00:10:29.295 "nvme_io_md": false, 00:10:29.295 "write_zeroes": true, 00:10:29.295 "zcopy": true, 00:10:29.295 "get_zone_info": false, 00:10:29.295 "zone_management": false, 00:10:29.295 "zone_append": false, 00:10:29.295 "compare": false, 00:10:29.295 "compare_and_write": false, 00:10:29.295 "abort": true, 00:10:29.295 "seek_hole": false, 00:10:29.295 "seek_data": false, 00:10:29.295 "copy": true, 00:10:29.295 "nvme_iov_md": false 00:10:29.295 }, 00:10:29.295 "memory_domains": [ 00:10:29.295 { 00:10:29.295 "dma_device_id": "system", 00:10:29.295 "dma_device_type": 1 00:10:29.295 }, 00:10:29.295 { 00:10:29.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.295 "dma_device_type": 2 00:10:29.295 } 00:10:29.295 ], 00:10:29.295 "driver_specific": { 00:10:29.295 "passthru": { 00:10:29.295 "name": "Passthru0", 00:10:29.295 "base_bdev_name": "Malloc2" 00:10:29.295 } 00:10:29.295 } 00:10:29.295 } 00:10:29.295 ]' 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:29.295 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:29.554 ************************************ 00:10:29.554 END TEST rpc_daemon_integrity 00:10:29.554 ************************************ 00:10:29.554 13:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:29.554 00:10:29.554 real 0m0.363s 00:10:29.554 user 0m0.219s 00:10:29.554 sys 0m0.050s 00:10:29.554 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.554 13:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:29.554 13:04:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:29.554 13:04:35 rpc -- rpc/rpc.sh@84 -- # killprocess 56956 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 56956 ']' 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@958 -- # kill -0 56956 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@959 -- # uname 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56956 00:10:29.554 killing process with pid 56956 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56956' 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@973 -- # kill 56956 00:10:29.554 13:04:35 rpc -- common/autotest_common.sh@978 -- # wait 56956 00:10:32.108 00:10:32.108 real 0m5.473s 00:10:32.108 user 0m6.058s 00:10:32.108 sys 0m1.071s 00:10:32.108 13:04:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.108 ************************************ 00:10:32.108 END TEST rpc 00:10:32.108 ************************************ 00:10:32.108 13:04:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 13:04:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:32.108 13:04:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.108 13:04:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.108 13:04:38 -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 ************************************ 00:10:32.108 START TEST skip_rpc 00:10:32.108 ************************************ 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:32.108 * Looking for test storage... 
00:10:32.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.108 13:04:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.108 --rc genhtml_branch_coverage=1 00:10:32.108 --rc genhtml_function_coverage=1 00:10:32.108 --rc genhtml_legend=1 00:10:32.108 --rc geninfo_all_blocks=1 00:10:32.108 --rc geninfo_unexecuted_blocks=1 00:10:32.108 00:10:32.108 ' 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.108 --rc genhtml_branch_coverage=1 00:10:32.108 --rc genhtml_function_coverage=1 00:10:32.108 --rc genhtml_legend=1 00:10:32.108 --rc geninfo_all_blocks=1 00:10:32.108 --rc geninfo_unexecuted_blocks=1 00:10:32.108 00:10:32.108 ' 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:10:32.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.108 --rc genhtml_branch_coverage=1 00:10:32.108 --rc genhtml_function_coverage=1 00:10:32.108 --rc genhtml_legend=1 00:10:32.108 --rc geninfo_all_blocks=1 00:10:32.108 --rc geninfo_unexecuted_blocks=1 00:10:32.108 00:10:32.108 ' 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.108 --rc genhtml_branch_coverage=1 00:10:32.108 --rc genhtml_function_coverage=1 00:10:32.108 --rc genhtml_legend=1 00:10:32.108 --rc geninfo_all_blocks=1 00:10:32.108 --rc geninfo_unexecuted_blocks=1 00:10:32.108 00:10:32.108 ' 00:10:32.108 13:04:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:32.108 13:04:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:32.108 13:04:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.108 13:04:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.108 ************************************ 00:10:32.108 START TEST skip_rpc 00:10:32.108 ************************************ 00:10:32.108 13:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:32.108 13:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57190 00:10:32.108 13:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:32.108 13:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:32.108 13:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:32.108 [2024-12-06 13:04:38.620122] Starting SPDK v25.01-pre 
git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:10:32.108 [2024-12-06 13:04:38.620501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57190 ] 00:10:32.367 [2024-12-06 13:04:38.797620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.626 [2024-12-06 13:04:38.929195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57190 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57190 ']' 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57190 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57190 00:10:37.896 killing process with pid 57190 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57190' 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57190 00:10:37.896 13:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57190 00:10:39.798 ************************************ 00:10:39.798 END TEST skip_rpc 00:10:39.798 ************************************ 00:10:39.798 00:10:39.798 real 0m7.354s 00:10:39.798 user 0m6.725s 00:10:39.798 sys 0m0.528s 00:10:39.798 13:04:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.798 13:04:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.798 13:04:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:39.798 13:04:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.798 13:04:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.798 13:04:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.798 
************************************ 00:10:39.798 START TEST skip_rpc_with_json 00:10:39.798 ************************************ 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57299 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57299 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57299 ']' 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.798 13:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:39.798 [2024-12-06 13:04:46.069805] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:10:39.798 [2024-12-06 13:04:46.070491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57299 ] 00:10:39.798 [2024-12-06 13:04:46.267291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.057 [2024-12-06 13:04:46.428024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:40.991 [2024-12-06 13:04:47.351129] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:40.991 request: 00:10:40.991 { 00:10:40.991 "trtype": "tcp", 00:10:40.991 "method": "nvmf_get_transports", 00:10:40.991 "req_id": 1 00:10:40.991 } 00:10:40.991 Got JSON-RPC error response 00:10:40.991 response: 00:10:40.991 { 00:10:40.991 "code": -19, 00:10:40.991 "message": "No such device" 00:10:40.991 } 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:40.991 [2024-12-06 13:04:47.363303] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.991 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:41.250 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.250 13:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:41.250 { 00:10:41.250 "subsystems": [ 00:10:41.250 { 00:10:41.250 "subsystem": "fsdev", 00:10:41.250 "config": [ 00:10:41.250 { 00:10:41.250 "method": "fsdev_set_opts", 00:10:41.250 "params": { 00:10:41.250 "fsdev_io_pool_size": 65535, 00:10:41.250 "fsdev_io_cache_size": 256 00:10:41.250 } 00:10:41.250 } 00:10:41.250 ] 00:10:41.250 }, 00:10:41.250 { 00:10:41.250 "subsystem": "keyring", 00:10:41.250 "config": [] 00:10:41.250 }, 00:10:41.250 { 00:10:41.250 "subsystem": "iobuf", 00:10:41.250 "config": [ 00:10:41.250 { 00:10:41.250 "method": "iobuf_set_options", 00:10:41.250 "params": { 00:10:41.250 "small_pool_count": 8192, 00:10:41.250 "large_pool_count": 1024, 00:10:41.250 "small_bufsize": 8192, 00:10:41.250 "large_bufsize": 135168, 00:10:41.250 "enable_numa": false 00:10:41.250 } 00:10:41.250 } 00:10:41.250 ] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "sock", 00:10:41.251 "config": [ 00:10:41.251 { 00:10:41.251 "method": "sock_set_default_impl", 00:10:41.251 "params": { 00:10:41.251 "impl_name": "posix" 00:10:41.251 } 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "method": "sock_impl_set_options", 00:10:41.251 "params": { 00:10:41.251 "impl_name": "ssl", 00:10:41.251 "recv_buf_size": 4096, 00:10:41.251 "send_buf_size": 4096, 00:10:41.251 "enable_recv_pipe": true, 00:10:41.251 "enable_quickack": false, 00:10:41.251 
"enable_placement_id": 0, 00:10:41.251 "enable_zerocopy_send_server": true, 00:10:41.251 "enable_zerocopy_send_client": false, 00:10:41.251 "zerocopy_threshold": 0, 00:10:41.251 "tls_version": 0, 00:10:41.251 "enable_ktls": false 00:10:41.251 } 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "method": "sock_impl_set_options", 00:10:41.251 "params": { 00:10:41.251 "impl_name": "posix", 00:10:41.251 "recv_buf_size": 2097152, 00:10:41.251 "send_buf_size": 2097152, 00:10:41.251 "enable_recv_pipe": true, 00:10:41.251 "enable_quickack": false, 00:10:41.251 "enable_placement_id": 0, 00:10:41.251 "enable_zerocopy_send_server": true, 00:10:41.251 "enable_zerocopy_send_client": false, 00:10:41.251 "zerocopy_threshold": 0, 00:10:41.251 "tls_version": 0, 00:10:41.251 "enable_ktls": false 00:10:41.251 } 00:10:41.251 } 00:10:41.251 ] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "vmd", 00:10:41.251 "config": [] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "accel", 00:10:41.251 "config": [ 00:10:41.251 { 00:10:41.251 "method": "accel_set_options", 00:10:41.251 "params": { 00:10:41.251 "small_cache_size": 128, 00:10:41.251 "large_cache_size": 16, 00:10:41.251 "task_count": 2048, 00:10:41.251 "sequence_count": 2048, 00:10:41.251 "buf_count": 2048 00:10:41.251 } 00:10:41.251 } 00:10:41.251 ] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "bdev", 00:10:41.251 "config": [ 00:10:41.251 { 00:10:41.251 "method": "bdev_set_options", 00:10:41.251 "params": { 00:10:41.251 "bdev_io_pool_size": 65535, 00:10:41.251 "bdev_io_cache_size": 256, 00:10:41.251 "bdev_auto_examine": true, 00:10:41.251 "iobuf_small_cache_size": 128, 00:10:41.251 "iobuf_large_cache_size": 16 00:10:41.251 } 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "method": "bdev_raid_set_options", 00:10:41.251 "params": { 00:10:41.251 "process_window_size_kb": 1024, 00:10:41.251 "process_max_bandwidth_mb_sec": 0 00:10:41.251 } 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "method": "bdev_iscsi_set_options", 
00:10:41.251 "params": { 00:10:41.251 "timeout_sec": 30 00:10:41.251 } 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "method": "bdev_nvme_set_options", 00:10:41.251 "params": { 00:10:41.251 "action_on_timeout": "none", 00:10:41.251 "timeout_us": 0, 00:10:41.251 "timeout_admin_us": 0, 00:10:41.251 "keep_alive_timeout_ms": 10000, 00:10:41.251 "arbitration_burst": 0, 00:10:41.251 "low_priority_weight": 0, 00:10:41.251 "medium_priority_weight": 0, 00:10:41.251 "high_priority_weight": 0, 00:10:41.251 "nvme_adminq_poll_period_us": 10000, 00:10:41.251 "nvme_ioq_poll_period_us": 0, 00:10:41.251 "io_queue_requests": 0, 00:10:41.251 "delay_cmd_submit": true, 00:10:41.251 "transport_retry_count": 4, 00:10:41.251 "bdev_retry_count": 3, 00:10:41.251 "transport_ack_timeout": 0, 00:10:41.251 "ctrlr_loss_timeout_sec": 0, 00:10:41.251 "reconnect_delay_sec": 0, 00:10:41.251 "fast_io_fail_timeout_sec": 0, 00:10:41.251 "disable_auto_failback": false, 00:10:41.251 "generate_uuids": false, 00:10:41.251 "transport_tos": 0, 00:10:41.251 "nvme_error_stat": false, 00:10:41.251 "rdma_srq_size": 0, 00:10:41.251 "io_path_stat": false, 00:10:41.251 "allow_accel_sequence": false, 00:10:41.251 "rdma_max_cq_size": 0, 00:10:41.251 "rdma_cm_event_timeout_ms": 0, 00:10:41.251 "dhchap_digests": [ 00:10:41.251 "sha256", 00:10:41.251 "sha384", 00:10:41.251 "sha512" 00:10:41.251 ], 00:10:41.251 "dhchap_dhgroups": [ 00:10:41.251 "null", 00:10:41.251 "ffdhe2048", 00:10:41.251 "ffdhe3072", 00:10:41.251 "ffdhe4096", 00:10:41.251 "ffdhe6144", 00:10:41.251 "ffdhe8192" 00:10:41.251 ] 00:10:41.251 } 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "method": "bdev_nvme_set_hotplug", 00:10:41.251 "params": { 00:10:41.251 "period_us": 100000, 00:10:41.251 "enable": false 00:10:41.251 } 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "method": "bdev_wait_for_examine" 00:10:41.251 } 00:10:41.251 ] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "scsi", 00:10:41.251 "config": null 00:10:41.251 }, 00:10:41.251 { 
00:10:41.251 "subsystem": "scheduler", 00:10:41.251 "config": [ 00:10:41.251 { 00:10:41.251 "method": "framework_set_scheduler", 00:10:41.251 "params": { 00:10:41.251 "name": "static" 00:10:41.251 } 00:10:41.251 } 00:10:41.251 ] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "vhost_scsi", 00:10:41.251 "config": [] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "vhost_blk", 00:10:41.251 "config": [] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "ublk", 00:10:41.251 "config": [] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "nbd", 00:10:41.251 "config": [] 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "subsystem": "nvmf", 00:10:41.251 "config": [ 00:10:41.251 { 00:10:41.251 "method": "nvmf_set_config", 00:10:41.251 "params": { 00:10:41.251 "discovery_filter": "match_any", 00:10:41.251 "admin_cmd_passthru": { 00:10:41.251 "identify_ctrlr": false 00:10:41.251 }, 00:10:41.251 "dhchap_digests": [ 00:10:41.251 "sha256", 00:10:41.251 "sha384", 00:10:41.251 "sha512" 00:10:41.251 ], 00:10:41.251 "dhchap_dhgroups": [ 00:10:41.251 "null", 00:10:41.251 "ffdhe2048", 00:10:41.251 "ffdhe3072", 00:10:41.251 "ffdhe4096", 00:10:41.251 "ffdhe6144", 00:10:41.251 "ffdhe8192" 00:10:41.251 ] 00:10:41.251 } 00:10:41.251 }, 00:10:41.251 { 00:10:41.251 "method": "nvmf_set_max_subsystems", 00:10:41.251 "params": { 00:10:41.252 "max_subsystems": 1024 00:10:41.252 } 00:10:41.252 }, 00:10:41.252 { 00:10:41.252 "method": "nvmf_set_crdt", 00:10:41.252 "params": { 00:10:41.252 "crdt1": 0, 00:10:41.252 "crdt2": 0, 00:10:41.252 "crdt3": 0 00:10:41.252 } 00:10:41.252 }, 00:10:41.252 { 00:10:41.252 "method": "nvmf_create_transport", 00:10:41.252 "params": { 00:10:41.252 "trtype": "TCP", 00:10:41.252 "max_queue_depth": 128, 00:10:41.252 "max_io_qpairs_per_ctrlr": 127, 00:10:41.252 "in_capsule_data_size": 4096, 00:10:41.252 "max_io_size": 131072, 00:10:41.252 "io_unit_size": 131072, 00:10:41.252 "max_aq_depth": 128, 00:10:41.252 "num_shared_buffers": 511, 
00:10:41.252 "buf_cache_size": 4294967295, 00:10:41.252 "dif_insert_or_strip": false, 00:10:41.252 "zcopy": false, 00:10:41.252 "c2h_success": true, 00:10:41.252 "sock_priority": 0, 00:10:41.252 "abort_timeout_sec": 1, 00:10:41.252 "ack_timeout": 0, 00:10:41.252 "data_wr_pool_size": 0 00:10:41.252 } 00:10:41.252 } 00:10:41.252 ] 00:10:41.252 }, 00:10:41.252 { 00:10:41.252 "subsystem": "iscsi", 00:10:41.252 "config": [ 00:10:41.252 { 00:10:41.252 "method": "iscsi_set_options", 00:10:41.252 "params": { 00:10:41.252 "node_base": "iqn.2016-06.io.spdk", 00:10:41.252 "max_sessions": 128, 00:10:41.252 "max_connections_per_session": 2, 00:10:41.252 "max_queue_depth": 64, 00:10:41.252 "default_time2wait": 2, 00:10:41.252 "default_time2retain": 20, 00:10:41.252 "first_burst_length": 8192, 00:10:41.252 "immediate_data": true, 00:10:41.252 "allow_duplicated_isid": false, 00:10:41.252 "error_recovery_level": 0, 00:10:41.252 "nop_timeout": 60, 00:10:41.252 "nop_in_interval": 30, 00:10:41.252 "disable_chap": false, 00:10:41.252 "require_chap": false, 00:10:41.252 "mutual_chap": false, 00:10:41.252 "chap_group": 0, 00:10:41.252 "max_large_datain_per_connection": 64, 00:10:41.252 "max_r2t_per_connection": 4, 00:10:41.252 "pdu_pool_size": 36864, 00:10:41.252 "immediate_data_pool_size": 16384, 00:10:41.252 "data_out_pool_size": 2048 00:10:41.252 } 00:10:41.252 } 00:10:41.252 ] 00:10:41.252 } 00:10:41.252 ] 00:10:41.252 } 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57299 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57299 ']' 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57299 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57299 00:10:41.252 killing process with pid 57299 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57299' 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57299 00:10:41.252 13:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57299 00:10:43.801 13:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57350 00:10:43.801 13:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:43.801 13:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57350 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57350 ']' 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57350 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57350 00:10:49.066 killing process with pid 57350 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57350' 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57350 00:10:49.066 13:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57350 00:10:51.086 13:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:51.087 ************************************ 00:10:51.087 END TEST skip_rpc_with_json 00:10:51.087 ************************************ 00:10:51.087 00:10:51.087 real 0m11.255s 00:10:51.087 user 0m10.500s 00:10:51.087 sys 0m1.218s 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:51.087 13:04:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:51.087 13:04:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.087 13:04:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.087 13:04:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.087 ************************************ 00:10:51.087 START TEST skip_rpc_with_delay 00:10:51.087 ************************************ 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:51.087 
13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:51.087 [2024-12-06 13:04:57.349081] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
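Editorial note: the `NOT … spdk_tgt --no-rpc-server … --wait-for-rpc` trace above is an expected-failure check — the target is supposed to refuse that flag combination, and the harness inverts the exit status. Below is a minimal, hedged stand-in for that pattern; the real helper is `NOT` in SPDK's `autotest_common.sh` and also validates the command via `valid_exec_arg`, which this sketch omits.

```shell
# Simplified expected-failure wrapper, modeled on the NOT helper used in the
# trace above (illustrative only, not the SPDK implementation).
NOT() {
    # Run the command; succeed only if it fails.
    if "$@"; then
        return 1
    fi
    return 0
}

# 'false' fails, so NOT false succeeds:
NOT false && echo "inverted-ok"
# 'true' succeeds, so NOT true fails:
NOT true || echo "inverted-fail"
```

In the log, this is why an `*ERROR*` line from `spdk_app_start` is followed by the test still counting as passed: the nonzero exit from `spdk_tgt` is exactly what `NOT` asserts.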
00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:51.087 00:10:51.087 real 0m0.198s 00:10:51.087 user 0m0.103s 00:10:51.087 sys 0m0.094s 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.087 13:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:51.087 ************************************ 00:10:51.087 END TEST skip_rpc_with_delay 00:10:51.087 ************************************ 00:10:51.087 13:04:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:51.087 13:04:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:51.087 13:04:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:51.087 13:04:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.087 13:04:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.087 13:04:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.087 ************************************ 00:10:51.087 START TEST exit_on_failed_rpc_init 00:10:51.087 ************************************ 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57484 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57484 00:10:51.087 13:04:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57484 ']' 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.087 13:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:51.368 [2024-12-06 13:04:57.627086] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:10:51.368 [2024-12-06 13:04:57.628031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57484 ] 00:10:51.368 [2024-12-06 13:04:57.821544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.625 [2024-12-06 13:04:57.962684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:52.560 13:04:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:52.560 13:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:52.560 [2024-12-06 13:04:59.043147] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
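Editorial note: the `Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...` message earlier in the trace comes from the harness's `waitforlisten` helper, which polls until the target's RPC socket is ready. A simplified, hedged stand-in (the real helper in `autotest_common.sh` also checks that the PID is still alive and probes the socket with `rpc.py`, which this sketch does not):

```shell
# Simplified stand-in for SPDK's waitforlisten: poll until a Unix socket
# path exists, bounded by a retry budget (illustrative only).
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $sock" >&2
            return 1
        fi
        sleep 0.1
    done
    return 0
}

# Demo: create a Unix socket with python3, then wait on it.
python3 -c 'import socket; socket.socket(socket.AF_UNIX).bind("/tmp/demo_wait.sock")'
wait_for_socket /tmp/demo_wait.sock && echo "listening"
```

This also explains the `RPC Unix domain socket path /var/tmp/spdk.sock in use` error in the trace above: the second `spdk_tgt` instance fails init because the first one is still listening on the same path, which is the failure `exit_on_failed_rpc_init` deliberately provokes.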
00:10:52.560 [2024-12-06 13:04:59.043689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57502 ] 00:10:52.818 [2024-12-06 13:04:59.240575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.077 [2024-12-06 13:04:59.395486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.077 [2024-12-06 13:04:59.395890] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:53.077 [2024-12-06 13:04:59.396064] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:53.077 [2024-12-06 13:04:59.396106] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57484 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57484 ']' 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57484 00:10:53.335 13:04:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57484 00:10:53.335 killing process with pid 57484 00:10:53.335 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.336 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.336 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57484' 00:10:53.336 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57484 00:10:53.336 13:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57484 00:10:55.871 00:10:55.871 real 0m4.565s 00:10:55.871 user 0m4.906s 00:10:55.871 sys 0m0.855s 00:10:55.871 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.871 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:55.871 ************************************ 00:10:55.871 END TEST exit_on_failed_rpc_init 00:10:55.871 ************************************ 00:10:55.871 13:05:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:55.871 ************************************ 00:10:55.871 END TEST skip_rpc 00:10:55.871 ************************************ 00:10:55.871 00:10:55.871 real 0m23.803s 00:10:55.871 user 0m22.421s 00:10:55.871 sys 0m2.930s 00:10:55.871 13:05:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.871 13:05:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.871 13:05:02 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:55.871 13:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.871 13:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.871 13:05:02 -- common/autotest_common.sh@10 -- # set +x 00:10:55.871 ************************************ 00:10:55.871 START TEST rpc_client 00:10:55.871 ************************************ 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:55.871 * Looking for test storage... 00:10:55.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@345 
-- # : 1 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.871 13:05:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.871 --rc genhtml_branch_coverage=1 00:10:55.871 --rc genhtml_function_coverage=1 00:10:55.871 --rc genhtml_legend=1 00:10:55.871 --rc geninfo_all_blocks=1 00:10:55.871 --rc geninfo_unexecuted_blocks=1 00:10:55.871 00:10:55.871 ' 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.871 --rc genhtml_branch_coverage=1 00:10:55.871 --rc genhtml_function_coverage=1 00:10:55.871 --rc 
genhtml_legend=1 00:10:55.871 --rc geninfo_all_blocks=1 00:10:55.871 --rc geninfo_unexecuted_blocks=1 00:10:55.871 00:10:55.871 ' 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.871 --rc genhtml_branch_coverage=1 00:10:55.871 --rc genhtml_function_coverage=1 00:10:55.871 --rc genhtml_legend=1 00:10:55.871 --rc geninfo_all_blocks=1 00:10:55.871 --rc geninfo_unexecuted_blocks=1 00:10:55.871 00:10:55.871 ' 00:10:55.871 13:05:02 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.871 --rc genhtml_branch_coverage=1 00:10:55.871 --rc genhtml_function_coverage=1 00:10:55.871 --rc genhtml_legend=1 00:10:55.871 --rc geninfo_all_blocks=1 00:10:55.871 --rc geninfo_unexecuted_blocks=1 00:10:55.871 00:10:55.871 ' 00:10:55.871 13:05:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:56.129 OK 00:10:56.129 13:05:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:56.129 00:10:56.129 real 0m0.272s 00:10:56.129 user 0m0.155s 00:10:56.129 sys 0m0.124s 00:10:56.129 13:05:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.129 13:05:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:56.129 ************************************ 00:10:56.129 END TEST rpc_client 00:10:56.129 ************************************ 00:10:56.129 13:05:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:56.129 13:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.129 13:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.129 13:05:02 -- common/autotest_common.sh@10 -- # set +x 00:10:56.129 ************************************ 00:10:56.129 START TEST json_config 
00:10:56.129 ************************************ 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:56.129 13:05:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.129 13:05:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.129 13:05:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.129 13:05:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.129 13:05:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.129 13:05:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.129 13:05:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.129 13:05:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.129 13:05:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.129 13:05:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.129 13:05:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.129 13:05:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:56.129 13:05:02 json_config -- scripts/common.sh@345 -- # : 1 00:10:56.129 13:05:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.129 13:05:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.129 13:05:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:56.129 13:05:02 json_config -- scripts/common.sh@353 -- # local d=1 00:10:56.129 13:05:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.129 13:05:02 json_config -- scripts/common.sh@355 -- # echo 1 00:10:56.129 13:05:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.129 13:05:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:56.129 13:05:02 json_config -- scripts/common.sh@353 -- # local d=2 00:10:56.129 13:05:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.129 13:05:02 json_config -- scripts/common.sh@355 -- # echo 2 00:10:56.129 13:05:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.129 13:05:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.129 13:05:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.129 13:05:02 json_config -- scripts/common.sh@368 -- # return 0 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:56.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.129 --rc genhtml_branch_coverage=1 00:10:56.129 --rc genhtml_function_coverage=1 00:10:56.129 --rc genhtml_legend=1 00:10:56.129 --rc geninfo_all_blocks=1 00:10:56.129 --rc geninfo_unexecuted_blocks=1 00:10:56.129 00:10:56.129 ' 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:56.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.129 --rc genhtml_branch_coverage=1 00:10:56.129 --rc genhtml_function_coverage=1 00:10:56.129 --rc genhtml_legend=1 00:10:56.129 --rc geninfo_all_blocks=1 00:10:56.129 --rc geninfo_unexecuted_blocks=1 00:10:56.129 00:10:56.129 ' 00:10:56.129 13:05:02 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:56.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.129 --rc genhtml_branch_coverage=1 00:10:56.129 --rc genhtml_function_coverage=1 00:10:56.129 --rc genhtml_legend=1 00:10:56.129 --rc geninfo_all_blocks=1 00:10:56.129 --rc geninfo_unexecuted_blocks=1 00:10:56.129 00:10:56.129 ' 00:10:56.129 13:05:02 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:56.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.129 --rc genhtml_branch_coverage=1 00:10:56.129 --rc genhtml_function_coverage=1 00:10:56.129 --rc genhtml_legend=1 00:10:56.129 --rc geninfo_all_blocks=1 00:10:56.129 --rc geninfo_unexecuted_blocks=1 00:10:56.129 00:10:56.129 ' 00:10:56.129 13:05:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:56.129 13:05:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c28d152-baac-47ce-8835-611fa8ea9449 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=9c28d152-baac-47ce-8835-611fa8ea9449 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.388 13:05:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.388 13:05:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.388 13:05:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.388 13:05:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.388 13:05:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.388 13:05:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.388 13:05:02 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.388 13:05:02 json_config -- paths/export.sh@5 -- # export PATH 00:10:56.388 13:05:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@51 -- # : 0 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.388 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.388 13:05:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.388 13:05:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:10:56.388 13:05:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:56.388 13:05:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:56.388 13:05:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:56.388 13:05:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:56.388 13:05:02 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:56.388 WARNING: No tests are enabled so not running JSON configuration tests 00:10:56.388 13:05:02 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:56.388 00:10:56.388 real 0m0.198s 00:10:56.388 user 0m0.127s 00:10:56.388 sys 0m0.072s 00:10:56.388 ************************************ 00:10:56.388 END TEST json_config 00:10:56.388 ************************************ 00:10:56.388 13:05:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.388 13:05:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:56.388 13:05:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:56.388 13:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.388 13:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.388 13:05:02 -- common/autotest_common.sh@10 -- # set +x 00:10:56.388 ************************************ 00:10:56.388 START TEST json_config_extra_key 00:10:56.388 ************************************ 00:10:56.388 13:05:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:56.388 13:05:02 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:56.388 13:05:02 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:10:56.388 13:05:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:56.388 13:05:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:56.388 13:05:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:56.389 13:05:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.389 13:05:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:56.389 13:05:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.389 13:05:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.389 13:05:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.389 13:05:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:56.389 13:05:02 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.389 13:05:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:56.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.389 --rc genhtml_branch_coverage=1 00:10:56.389 --rc genhtml_function_coverage=1 00:10:56.389 --rc genhtml_legend=1 00:10:56.389 --rc geninfo_all_blocks=1 00:10:56.389 --rc geninfo_unexecuted_blocks=1 00:10:56.389 00:10:56.389 ' 00:10:56.389 13:05:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:56.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.389 --rc genhtml_branch_coverage=1 00:10:56.389 --rc genhtml_function_coverage=1 00:10:56.389 --rc 
genhtml_legend=1 00:10:56.389 --rc geninfo_all_blocks=1 00:10:56.389 --rc geninfo_unexecuted_blocks=1 00:10:56.389 00:10:56.389 ' 00:10:56.389 13:05:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:56.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.389 --rc genhtml_branch_coverage=1 00:10:56.389 --rc genhtml_function_coverage=1 00:10:56.389 --rc genhtml_legend=1 00:10:56.389 --rc geninfo_all_blocks=1 00:10:56.389 --rc geninfo_unexecuted_blocks=1 00:10:56.389 00:10:56.389 ' 00:10:56.389 13:05:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:56.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.389 --rc genhtml_branch_coverage=1 00:10:56.389 --rc genhtml_function_coverage=1 00:10:56.389 --rc genhtml_legend=1 00:10:56.389 --rc geninfo_all_blocks=1 00:10:56.389 --rc geninfo_unexecuted_blocks=1 00:10:56.389 00:10:56.389 ' 00:10:56.389 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.389 13:05:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9c28d152-baac-47ce-8835-611fa8ea9449 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9c28d152-baac-47ce-8835-611fa8ea9449 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.648 13:05:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.648 13:05:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.648 13:05:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.648 13:05:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.648 13:05:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.648 13:05:02 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.648 13:05:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.648 13:05:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:56.648 13:05:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.648 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.648 13:05:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:56.648 INFO: launching applications... 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:10:56.648 13:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57712 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:56.648 Waiting for target to run... 00:10:56.648 13:05:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57712 /var/tmp/spdk_tgt.sock 00:10:56.648 13:05:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57712 ']' 00:10:56.649 13:05:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:56.649 13:05:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.649 13:05:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:10:56.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:56.649 13:05:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.649 13:05:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:56.649 [2024-12-06 13:05:03.077502] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:10:56.649 [2024-12-06 13:05:03.078151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57712 ] 00:10:57.215 [2024-12-06 13:05:03.695320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.473 [2024-12-06 13:05:03.857293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.041 00:10:58.041 INFO: shutting down applications... 00:10:58.041 13:05:04 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.041 13:05:04 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:58.041 13:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:10:58.041 13:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57712 ]] 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57712 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57712 00:10:58.041 13:05:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:58.609 13:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:58.609 13:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:58.609 13:05:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57712 00:10:58.609 13:05:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:59.175 13:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:59.175 13:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:59.175 13:05:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57712 00:10:59.175 13:05:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:59.771 13:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:59.771 13:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:59.771 13:05:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57712 00:10:59.771 13:05:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:00.339 13:05:06 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:11:00.339 13:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:00.339 13:05:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57712 00:11:00.339 13:05:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:00.597 13:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:00.597 13:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:00.597 13:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57712 00:11:00.597 13:05:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:01.164 13:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:01.164 13:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:01.164 13:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57712 00:11:01.164 13:05:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:01.164 13:05:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:01.164 SPDK target shutdown done 00:11:01.164 Success 00:11:01.164 13:05:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:01.164 13:05:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:01.164 13:05:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:01.164 00:11:01.164 real 0m4.841s 00:11:01.164 user 0m4.176s 00:11:01.164 sys 0m0.836s 00:11:01.164 ************************************ 00:11:01.164 END TEST json_config_extra_key 00:11:01.164 ************************************ 00:11:01.164 13:05:07 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.164 13:05:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:01.165 13:05:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:01.165 13:05:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.165 13:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.165 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:11:01.165 ************************************ 00:11:01.165 START TEST alias_rpc 00:11:01.165 ************************************ 00:11:01.165 13:05:07 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:01.424 * Looking for test storage... 00:11:01.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:01.424 13:05:07 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.424 13:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.424 13:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.424 13:05:07 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:01.424 13:05:07 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.424 13:05:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:01.424 13:05:07 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.424 13:05:07 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.424 --rc genhtml_branch_coverage=1 00:11:01.424 --rc genhtml_function_coverage=1 00:11:01.424 --rc genhtml_legend=1 00:11:01.424 --rc geninfo_all_blocks=1 00:11:01.424 --rc geninfo_unexecuted_blocks=1 00:11:01.424 00:11:01.424 ' 00:11:01.424 13:05:07 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.425 --rc genhtml_branch_coverage=1 00:11:01.425 --rc genhtml_function_coverage=1 00:11:01.425 --rc 
genhtml_legend=1 00:11:01.425 --rc geninfo_all_blocks=1 00:11:01.425 --rc geninfo_unexecuted_blocks=1 00:11:01.425 00:11:01.425 ' 00:11:01.425 13:05:07 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.425 --rc genhtml_branch_coverage=1 00:11:01.425 --rc genhtml_function_coverage=1 00:11:01.425 --rc genhtml_legend=1 00:11:01.425 --rc geninfo_all_blocks=1 00:11:01.425 --rc geninfo_unexecuted_blocks=1 00:11:01.425 00:11:01.425 ' 00:11:01.425 13:05:07 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.425 --rc genhtml_branch_coverage=1 00:11:01.425 --rc genhtml_function_coverage=1 00:11:01.425 --rc genhtml_legend=1 00:11:01.425 --rc geninfo_all_blocks=1 00:11:01.425 --rc geninfo_unexecuted_blocks=1 00:11:01.425 00:11:01.425 ' 00:11:01.425 13:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:01.425 13:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57829 00:11:01.425 13:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57829 00:11:01.425 13:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:01.425 13:05:07 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57829 ']' 00:11:01.425 13:05:07 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.425 13:05:07 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.425 13:05:07 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:01.425 13:05:07 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.425 13:05:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.684 [2024-12-06 13:05:07.990316] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:01.684 [2024-12-06 13:05:07.991007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57829 ] 00:11:01.684 [2024-12-06 13:05:08.178404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.942 [2024-12-06 13:05:08.326124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.959 13:05:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.959 13:05:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:02.959 13:05:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:03.260 13:05:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57829 00:11:03.260 13:05:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57829 ']' 00:11:03.260 13:05:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57829 00:11:03.260 13:05:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:11:03.260 13:05:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.260 13:05:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57829 00:11:03.517 killing process with pid 57829 00:11:03.517 13:05:09 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.517 13:05:09 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.517 13:05:09 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57829' 00:11:03.517 13:05:09 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57829 00:11:03.517 13:05:09 alias_rpc -- common/autotest_common.sh@978 -- # wait 57829 00:11:06.049 ************************************ 00:11:06.049 END TEST alias_rpc 00:11:06.049 ************************************ 00:11:06.049 00:11:06.049 real 0m4.637s 00:11:06.049 user 0m4.746s 00:11:06.049 sys 0m0.817s 00:11:06.049 13:05:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.049 13:05:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.049 13:05:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:06.049 13:05:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:06.049 13:05:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:06.049 13:05:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.049 13:05:12 -- common/autotest_common.sh@10 -- # set +x 00:11:06.049 ************************************ 00:11:06.049 START TEST spdkcli_tcp 00:11:06.049 ************************************ 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:06.049 * Looking for test storage... 
00:11:06.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.049 13:05:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:06.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.049 --rc genhtml_branch_coverage=1 00:11:06.049 --rc genhtml_function_coverage=1 00:11:06.049 --rc genhtml_legend=1 00:11:06.049 --rc geninfo_all_blocks=1 00:11:06.049 --rc geninfo_unexecuted_blocks=1 00:11:06.049 00:11:06.049 ' 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:06.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.049 --rc genhtml_branch_coverage=1 00:11:06.049 --rc genhtml_function_coverage=1 00:11:06.049 --rc genhtml_legend=1 00:11:06.049 --rc geninfo_all_blocks=1 00:11:06.049 --rc geninfo_unexecuted_blocks=1 00:11:06.049 00:11:06.049 ' 00:11:06.049 13:05:12 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:06.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.049 --rc genhtml_branch_coverage=1 00:11:06.049 --rc genhtml_function_coverage=1 00:11:06.049 --rc genhtml_legend=1 00:11:06.049 --rc geninfo_all_blocks=1 00:11:06.049 --rc geninfo_unexecuted_blocks=1 00:11:06.049 00:11:06.049 ' 00:11:06.049 13:05:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:06.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.049 --rc genhtml_branch_coverage=1 00:11:06.049 --rc genhtml_function_coverage=1 00:11:06.049 --rc genhtml_legend=1 00:11:06.049 --rc geninfo_all_blocks=1 00:11:06.049 --rc geninfo_unexecuted_blocks=1 00:11:06.049 00:11:06.049 ' 00:11:06.049 13:05:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:06.049 13:05:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:06.049 13:05:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:06.049 13:05:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:06.050 13:05:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:06.050 13:05:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:06.050 13:05:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:06.050 13:05:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.050 13:05:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.050 13:05:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57942 00:11:06.050 13:05:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:06.050 13:05:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57942 00:11:06.050 13:05:12 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57942 ']' 00:11:06.050 13:05:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.050 13:05:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.050 13:05:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.050 13:05:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.050 13:05:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.309 [2024-12-06 13:05:12.703726] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:06.309 [2024-12-06 13:05:12.704697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57942 ] 00:11:06.568 [2024-12-06 13:05:12.890900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:06.568 [2024-12-06 13:05:13.039815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.568 [2024-12-06 13:05:13.039840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.502 13:05:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.502 13:05:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:11:07.502 13:05:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57964 00:11:07.502 13:05:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:07.502 13:05:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:07.761 [ 00:11:07.761 "bdev_malloc_delete", 
00:11:07.761 "bdev_malloc_create", 00:11:07.761 "bdev_null_resize", 00:11:07.761 "bdev_null_delete", 00:11:07.761 "bdev_null_create", 00:11:07.761 "bdev_nvme_cuse_unregister", 00:11:07.761 "bdev_nvme_cuse_register", 00:11:07.761 "bdev_opal_new_user", 00:11:07.761 "bdev_opal_set_lock_state", 00:11:07.761 "bdev_opal_delete", 00:11:07.762 "bdev_opal_get_info", 00:11:07.762 "bdev_opal_create", 00:11:07.762 "bdev_nvme_opal_revert", 00:11:07.762 "bdev_nvme_opal_init", 00:11:07.762 "bdev_nvme_send_cmd", 00:11:07.762 "bdev_nvme_set_keys", 00:11:07.762 "bdev_nvme_get_path_iostat", 00:11:07.762 "bdev_nvme_get_mdns_discovery_info", 00:11:07.762 "bdev_nvme_stop_mdns_discovery", 00:11:07.762 "bdev_nvme_start_mdns_discovery", 00:11:07.762 "bdev_nvme_set_multipath_policy", 00:11:07.762 "bdev_nvme_set_preferred_path", 00:11:07.762 "bdev_nvme_get_io_paths", 00:11:07.762 "bdev_nvme_remove_error_injection", 00:11:07.762 "bdev_nvme_add_error_injection", 00:11:07.762 "bdev_nvme_get_discovery_info", 00:11:07.762 "bdev_nvme_stop_discovery", 00:11:07.762 "bdev_nvme_start_discovery", 00:11:07.762 "bdev_nvme_get_controller_health_info", 00:11:07.762 "bdev_nvme_disable_controller", 00:11:07.762 "bdev_nvme_enable_controller", 00:11:07.762 "bdev_nvme_reset_controller", 00:11:07.762 "bdev_nvme_get_transport_statistics", 00:11:07.762 "bdev_nvme_apply_firmware", 00:11:07.762 "bdev_nvme_detach_controller", 00:11:07.762 "bdev_nvme_get_controllers", 00:11:07.762 "bdev_nvme_attach_controller", 00:11:07.762 "bdev_nvme_set_hotplug", 00:11:07.762 "bdev_nvme_set_options", 00:11:07.762 "bdev_passthru_delete", 00:11:07.762 "bdev_passthru_create", 00:11:07.762 "bdev_lvol_set_parent_bdev", 00:11:07.762 "bdev_lvol_set_parent", 00:11:07.762 "bdev_lvol_check_shallow_copy", 00:11:07.762 "bdev_lvol_start_shallow_copy", 00:11:07.762 "bdev_lvol_grow_lvstore", 00:11:07.762 "bdev_lvol_get_lvols", 00:11:07.762 "bdev_lvol_get_lvstores", 00:11:07.762 "bdev_lvol_delete", 00:11:07.762 "bdev_lvol_set_read_only", 
00:11:07.762 "bdev_lvol_resize", 00:11:07.762 "bdev_lvol_decouple_parent", 00:11:07.762 "bdev_lvol_inflate", 00:11:07.762 "bdev_lvol_rename", 00:11:07.762 "bdev_lvol_clone_bdev", 00:11:07.762 "bdev_lvol_clone", 00:11:07.762 "bdev_lvol_snapshot", 00:11:07.762 "bdev_lvol_create", 00:11:07.762 "bdev_lvol_delete_lvstore", 00:11:07.762 "bdev_lvol_rename_lvstore", 00:11:07.762 "bdev_lvol_create_lvstore", 00:11:07.762 "bdev_raid_set_options", 00:11:07.762 "bdev_raid_remove_base_bdev", 00:11:07.762 "bdev_raid_add_base_bdev", 00:11:07.762 "bdev_raid_delete", 00:11:07.762 "bdev_raid_create", 00:11:07.762 "bdev_raid_get_bdevs", 00:11:07.762 "bdev_error_inject_error", 00:11:07.762 "bdev_error_delete", 00:11:07.762 "bdev_error_create", 00:11:07.762 "bdev_split_delete", 00:11:07.762 "bdev_split_create", 00:11:07.762 "bdev_delay_delete", 00:11:07.762 "bdev_delay_create", 00:11:07.762 "bdev_delay_update_latency", 00:11:07.762 "bdev_zone_block_delete", 00:11:07.762 "bdev_zone_block_create", 00:11:07.762 "blobfs_create", 00:11:07.762 "blobfs_detect", 00:11:07.762 "blobfs_set_cache_size", 00:11:07.762 "bdev_aio_delete", 00:11:07.762 "bdev_aio_rescan", 00:11:07.762 "bdev_aio_create", 00:11:07.762 "bdev_ftl_set_property", 00:11:07.762 "bdev_ftl_get_properties", 00:11:07.762 "bdev_ftl_get_stats", 00:11:07.762 "bdev_ftl_unmap", 00:11:07.762 "bdev_ftl_unload", 00:11:07.762 "bdev_ftl_delete", 00:11:07.762 "bdev_ftl_load", 00:11:07.762 "bdev_ftl_create", 00:11:07.762 "bdev_virtio_attach_controller", 00:11:07.762 "bdev_virtio_scsi_get_devices", 00:11:07.762 "bdev_virtio_detach_controller", 00:11:07.762 "bdev_virtio_blk_set_hotplug", 00:11:07.762 "bdev_iscsi_delete", 00:11:07.762 "bdev_iscsi_create", 00:11:07.762 "bdev_iscsi_set_options", 00:11:07.762 "accel_error_inject_error", 00:11:07.762 "ioat_scan_accel_module", 00:11:07.762 "dsa_scan_accel_module", 00:11:07.762 "iaa_scan_accel_module", 00:11:07.762 "keyring_file_remove_key", 00:11:07.762 "keyring_file_add_key", 00:11:07.762 
"keyring_linux_set_options", 00:11:07.762 "fsdev_aio_delete", 00:11:07.762 "fsdev_aio_create", 00:11:07.762 "iscsi_get_histogram", 00:11:07.762 "iscsi_enable_histogram", 00:11:07.762 "iscsi_set_options", 00:11:07.762 "iscsi_get_auth_groups", 00:11:07.762 "iscsi_auth_group_remove_secret", 00:11:07.762 "iscsi_auth_group_add_secret", 00:11:07.762 "iscsi_delete_auth_group", 00:11:07.762 "iscsi_create_auth_group", 00:11:07.762 "iscsi_set_discovery_auth", 00:11:07.762 "iscsi_get_options", 00:11:07.762 "iscsi_target_node_request_logout", 00:11:07.762 "iscsi_target_node_set_redirect", 00:11:07.762 "iscsi_target_node_set_auth", 00:11:07.762 "iscsi_target_node_add_lun", 00:11:07.762 "iscsi_get_stats", 00:11:07.762 "iscsi_get_connections", 00:11:07.762 "iscsi_portal_group_set_auth", 00:11:07.762 "iscsi_start_portal_group", 00:11:07.762 "iscsi_delete_portal_group", 00:11:07.762 "iscsi_create_portal_group", 00:11:07.762 "iscsi_get_portal_groups", 00:11:07.762 "iscsi_delete_target_node", 00:11:07.762 "iscsi_target_node_remove_pg_ig_maps", 00:11:07.762 "iscsi_target_node_add_pg_ig_maps", 00:11:07.762 "iscsi_create_target_node", 00:11:07.762 "iscsi_get_target_nodes", 00:11:07.762 "iscsi_delete_initiator_group", 00:11:07.762 "iscsi_initiator_group_remove_initiators", 00:11:07.762 "iscsi_initiator_group_add_initiators", 00:11:07.762 "iscsi_create_initiator_group", 00:11:07.762 "iscsi_get_initiator_groups", 00:11:07.762 "nvmf_set_crdt", 00:11:07.762 "nvmf_set_config", 00:11:07.762 "nvmf_set_max_subsystems", 00:11:07.762 "nvmf_stop_mdns_prr", 00:11:07.762 "nvmf_publish_mdns_prr", 00:11:07.762 "nvmf_subsystem_get_listeners", 00:11:07.762 "nvmf_subsystem_get_qpairs", 00:11:07.762 "nvmf_subsystem_get_controllers", 00:11:07.762 "nvmf_get_stats", 00:11:07.762 "nvmf_get_transports", 00:11:07.762 "nvmf_create_transport", 00:11:07.762 "nvmf_get_targets", 00:11:07.762 "nvmf_delete_target", 00:11:07.762 "nvmf_create_target", 00:11:07.762 "nvmf_subsystem_allow_any_host", 00:11:07.762 
"nvmf_subsystem_set_keys", 00:11:07.762 "nvmf_subsystem_remove_host", 00:11:07.762 "nvmf_subsystem_add_host", 00:11:07.762 "nvmf_ns_remove_host", 00:11:07.762 "nvmf_ns_add_host", 00:11:07.762 "nvmf_subsystem_remove_ns", 00:11:07.762 "nvmf_subsystem_set_ns_ana_group", 00:11:07.762 "nvmf_subsystem_add_ns", 00:11:07.762 "nvmf_subsystem_listener_set_ana_state", 00:11:07.762 "nvmf_discovery_get_referrals", 00:11:07.762 "nvmf_discovery_remove_referral", 00:11:07.762 "nvmf_discovery_add_referral", 00:11:07.762 "nvmf_subsystem_remove_listener", 00:11:07.762 "nvmf_subsystem_add_listener", 00:11:07.762 "nvmf_delete_subsystem", 00:11:07.762 "nvmf_create_subsystem", 00:11:07.762 "nvmf_get_subsystems", 00:11:07.762 "env_dpdk_get_mem_stats", 00:11:07.762 "nbd_get_disks", 00:11:07.762 "nbd_stop_disk", 00:11:07.762 "nbd_start_disk", 00:11:07.762 "ublk_recover_disk", 00:11:07.762 "ublk_get_disks", 00:11:07.762 "ublk_stop_disk", 00:11:07.762 "ublk_start_disk", 00:11:07.762 "ublk_destroy_target", 00:11:07.762 "ublk_create_target", 00:11:07.762 "virtio_blk_create_transport", 00:11:07.762 "virtio_blk_get_transports", 00:11:07.762 "vhost_controller_set_coalescing", 00:11:07.762 "vhost_get_controllers", 00:11:07.762 "vhost_delete_controller", 00:11:07.762 "vhost_create_blk_controller", 00:11:07.762 "vhost_scsi_controller_remove_target", 00:11:07.762 "vhost_scsi_controller_add_target", 00:11:07.762 "vhost_start_scsi_controller", 00:11:07.762 "vhost_create_scsi_controller", 00:11:07.762 "thread_set_cpumask", 00:11:07.762 "scheduler_set_options", 00:11:07.762 "framework_get_governor", 00:11:07.762 "framework_get_scheduler", 00:11:07.762 "framework_set_scheduler", 00:11:07.762 "framework_get_reactors", 00:11:07.762 "thread_get_io_channels", 00:11:07.762 "thread_get_pollers", 00:11:07.762 "thread_get_stats", 00:11:07.762 "framework_monitor_context_switch", 00:11:07.762 "spdk_kill_instance", 00:11:07.762 "log_enable_timestamps", 00:11:07.762 "log_get_flags", 00:11:07.762 "log_clear_flag", 
00:11:07.762 "log_set_flag", 00:11:07.762 "log_get_level", 00:11:07.762 "log_set_level", 00:11:07.762 "log_get_print_level", 00:11:07.762 "log_set_print_level", 00:11:07.762 "framework_enable_cpumask_locks", 00:11:07.762 "framework_disable_cpumask_locks", 00:11:07.762 "framework_wait_init", 00:11:07.762 "framework_start_init", 00:11:07.762 "scsi_get_devices", 00:11:07.762 "bdev_get_histogram", 00:11:07.762 "bdev_enable_histogram", 00:11:07.762 "bdev_set_qos_limit", 00:11:07.762 "bdev_set_qd_sampling_period", 00:11:07.762 "bdev_get_bdevs", 00:11:07.762 "bdev_reset_iostat", 00:11:07.762 "bdev_get_iostat", 00:11:07.762 "bdev_examine", 00:11:07.762 "bdev_wait_for_examine", 00:11:07.762 "bdev_set_options", 00:11:07.762 "accel_get_stats", 00:11:07.762 "accel_set_options", 00:11:07.762 "accel_set_driver", 00:11:07.762 "accel_crypto_key_destroy", 00:11:07.762 "accel_crypto_keys_get", 00:11:07.762 "accel_crypto_key_create", 00:11:07.762 "accel_assign_opc", 00:11:07.762 "accel_get_module_info", 00:11:07.762 "accel_get_opc_assignments", 00:11:07.762 "vmd_rescan", 00:11:07.762 "vmd_remove_device", 00:11:07.762 "vmd_enable", 00:11:07.762 "sock_get_default_impl", 00:11:07.763 "sock_set_default_impl", 00:11:07.763 "sock_impl_set_options", 00:11:07.763 "sock_impl_get_options", 00:11:07.763 "iobuf_get_stats", 00:11:07.763 "iobuf_set_options", 00:11:07.763 "keyring_get_keys", 00:11:07.763 "framework_get_pci_devices", 00:11:07.763 "framework_get_config", 00:11:07.763 "framework_get_subsystems", 00:11:07.763 "fsdev_set_opts", 00:11:07.763 "fsdev_get_opts", 00:11:07.763 "trace_get_info", 00:11:07.763 "trace_get_tpoint_group_mask", 00:11:07.763 "trace_disable_tpoint_group", 00:11:07.763 "trace_enable_tpoint_group", 00:11:07.763 "trace_clear_tpoint_mask", 00:11:07.763 "trace_set_tpoint_mask", 00:11:07.763 "notify_get_notifications", 00:11:07.763 "notify_get_types", 00:11:07.763 "spdk_get_version", 00:11:07.763 "rpc_get_methods" 00:11:07.763 ] 00:11:07.763 13:05:14 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:07.763 13:05:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.763 13:05:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.022 13:05:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:08.022 13:05:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57942 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57942 ']' 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57942 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57942 00:11:08.022 killing process with pid 57942 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57942' 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57942 00:11:08.022 13:05:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57942 00:11:10.552 ************************************ 00:11:10.552 END TEST spdkcli_tcp 00:11:10.552 ************************************ 00:11:10.552 00:11:10.552 real 0m4.457s 00:11:10.552 user 0m7.835s 00:11:10.552 sys 0m0.854s 00:11:10.552 13:05:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.552 13:05:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.552 13:05:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:10.552 13:05:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.552 13:05:16 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.552 13:05:16 -- common/autotest_common.sh@10 -- # set +x 00:11:10.552 ************************************ 00:11:10.552 START TEST dpdk_mem_utility 00:11:10.552 ************************************ 00:11:10.552 13:05:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:10.552 * Looking for test storage... 00:11:10.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:10.552 13:05:16 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:10.552 13:05:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:11:10.552 13:05:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:10.552 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:10.552 
13:05:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.552 13:05:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.553 13:05:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:10.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.553 --rc genhtml_branch_coverage=1 00:11:10.553 --rc genhtml_function_coverage=1 00:11:10.553 --rc genhtml_legend=1 00:11:10.553 --rc geninfo_all_blocks=1 00:11:10.553 --rc geninfo_unexecuted_blocks=1 00:11:10.553 00:11:10.553 ' 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:10.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.553 --rc 
genhtml_branch_coverage=1 00:11:10.553 --rc genhtml_function_coverage=1 00:11:10.553 --rc genhtml_legend=1 00:11:10.553 --rc geninfo_all_blocks=1 00:11:10.553 --rc geninfo_unexecuted_blocks=1 00:11:10.553 00:11:10.553 ' 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:10.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.553 --rc genhtml_branch_coverage=1 00:11:10.553 --rc genhtml_function_coverage=1 00:11:10.553 --rc genhtml_legend=1 00:11:10.553 --rc geninfo_all_blocks=1 00:11:10.553 --rc geninfo_unexecuted_blocks=1 00:11:10.553 00:11:10.553 ' 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:10.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.553 --rc genhtml_branch_coverage=1 00:11:10.553 --rc genhtml_function_coverage=1 00:11:10.553 --rc genhtml_legend=1 00:11:10.553 --rc geninfo_all_blocks=1 00:11:10.553 --rc geninfo_unexecuted_blocks=1 00:11:10.553 00:11:10.553 ' 00:11:10.553 13:05:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:10.553 13:05:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58069 00:11:10.553 13:05:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58069 00:11:10.553 13:05:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58069 ']' 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:10.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.553 13:05:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:10.872 [2024-12-06 13:05:17.201217] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:10.872 [2024-12-06 13:05:17.201771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58069 ] 00:11:10.872 [2024-12-06 13:05:17.387498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.130 [2024-12-06 13:05:17.568440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.065 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.065 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:11:12.065 13:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:12.065 13:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:12.065 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.065 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:12.065 { 00:11:12.065 "filename": "/tmp/spdk_mem_dump.txt" 00:11:12.065 } 00:11:12.065 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.065 13:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:12.326 DPDK memory size 824.000000 MiB in 1 heap(s) 00:11:12.326 1 heaps totaling size 824.000000 MiB 00:11:12.326 size: 
824.000000 MiB heap id: 0 00:11:12.326 end heaps---------- 00:11:12.326 9 mempools totaling size 603.782043 MiB 00:11:12.326 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:12.326 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:12.326 size: 100.555481 MiB name: bdev_io_58069 00:11:12.326 size: 50.003479 MiB name: msgpool_58069 00:11:12.326 size: 36.509338 MiB name: fsdev_io_58069 00:11:12.326 size: 21.763794 MiB name: PDU_Pool 00:11:12.326 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:12.326 size: 4.133484 MiB name: evtpool_58069 00:11:12.326 size: 0.026123 MiB name: Session_Pool 00:11:12.326 end mempools------- 00:11:12.326 6 memzones totaling size 4.142822 MiB 00:11:12.326 size: 1.000366 MiB name: RG_ring_0_58069 00:11:12.326 size: 1.000366 MiB name: RG_ring_1_58069 00:11:12.326 size: 1.000366 MiB name: RG_ring_4_58069 00:11:12.326 size: 1.000366 MiB name: RG_ring_5_58069 00:11:12.326 size: 0.125366 MiB name: RG_ring_2_58069 00:11:12.326 size: 0.015991 MiB name: RG_ring_3_58069 00:11:12.326 end memzones------- 00:11:12.326 13:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:12.326 heap id: 0 total size: 824.000000 MiB number of busy elements: 316 number of free elements: 18 00:11:12.326 list of free elements. 
size: 16.781128 MiB 00:11:12.326 element at address: 0x200006400000 with size: 1.995972 MiB 00:11:12.326 element at address: 0x20000a600000 with size: 1.995972 MiB 00:11:12.326 element at address: 0x200003e00000 with size: 1.991028 MiB 00:11:12.326 element at address: 0x200019500040 with size: 0.999939 MiB 00:11:12.326 element at address: 0x200019900040 with size: 0.999939 MiB 00:11:12.326 element at address: 0x200019a00000 with size: 0.999084 MiB 00:11:12.326 element at address: 0x200032600000 with size: 0.994324 MiB 00:11:12.326 element at address: 0x200000400000 with size: 0.992004 MiB 00:11:12.326 element at address: 0x200019200000 with size: 0.959656 MiB 00:11:12.326 element at address: 0x200019d00040 with size: 0.936401 MiB 00:11:12.326 element at address: 0x200000200000 with size: 0.716980 MiB 00:11:12.326 element at address: 0x20001b400000 with size: 0.562683 MiB 00:11:12.326 element at address: 0x200000c00000 with size: 0.489197 MiB 00:11:12.326 element at address: 0x200019600000 with size: 0.487976 MiB 00:11:12.326 element at address: 0x200019e00000 with size: 0.485413 MiB 00:11:12.326 element at address: 0x200012c00000 with size: 0.433228 MiB 00:11:12.326 element at address: 0x200028800000 with size: 0.390442 MiB 00:11:12.326 element at address: 0x200000800000 with size: 0.350891 MiB 00:11:12.326 list of standard malloc elements. 
size: 199.287964 MiB 00:11:12.326 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:11:12.326 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:11:12.326 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:11:12.326 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:11:12.326 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:11:12.326 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:11:12.326 element at address: 0x200019deff40 with size: 0.062683 MiB 00:11:12.326 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:11:12.326 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:11:12.326 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:11:12.326 element at address: 0x200012bff040 with size: 0.000305 MiB 00:11:12.326 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:11:12.326 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:11:12.326 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:11:12.326 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:11:12.326 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200000cff000 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bff180 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bff280 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bff380 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bff480 with size: 0.000244 MiB 00:11:12.327 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bff680 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bff780 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bff880 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bff980 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:11:12.327 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200019affc40 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4910c0 with size: 0.000244 
MiB 00:11:12.327 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b492cc0 
with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:11:12.327 element at 
address: 0x20001b4948c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200028863f40 with size: 0.000244 MiB 00:11:12.327 element at address: 0x200028864040 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886af80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b080 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b180 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b280 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b380 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b480 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b580 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b680 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b780 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b880 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886b980 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886bb80 with size: 0.000244 MiB 
00:11:12.327 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886be80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c080 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c180 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c280 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c380 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c480 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c580 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c680 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c780 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c880 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886c980 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d080 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d180 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d280 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d380 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d480 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d580 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d680 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d780 with 
size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d880 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886d980 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886da80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886db80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886de80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886df80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e080 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e180 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e280 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e380 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e480 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e580 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e680 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e780 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e880 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886e980 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f080 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f180 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f280 with size: 0.000244 MiB 00:11:12.327 element at address: 
0x20002886f380 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f480 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f580 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f680 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f780 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f880 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886f980 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:11:12.327 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:11:12.327 list of memzone associated elements. size: 607.930908 MiB 00:11:12.327 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:11:12.327 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:12.327 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:11:12.328 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:12.328 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:11:12.328 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58069_0 00:11:12.328 element at address: 0x200000dff340 with size: 48.003113 MiB 00:11:12.328 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58069_0 00:11:12.328 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:11:12.328 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58069_0 00:11:12.328 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:11:12.328 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:12.328 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:11:12.328 associated memzone info: size: 18.004944 MiB name: 
MP_SCSI_TASK_Pool_0 00:11:12.328 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:11:12.328 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58069_0 00:11:12.328 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:11:12.328 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58069 00:11:12.328 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:11:12.328 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58069 00:11:12.328 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:11:12.328 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:12.328 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:11:12.328 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:12.328 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:11:12.328 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:12.328 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:11:12.328 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:12.328 element at address: 0x200000cff100 with size: 1.000549 MiB 00:11:12.328 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58069 00:11:12.328 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:11:12.328 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58069 00:11:12.328 element at address: 0x200019affd40 with size: 1.000549 MiB 00:11:12.328 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58069 00:11:12.328 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:11:12.328 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58069 00:11:12.328 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:11:12.328 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58069 00:11:12.328 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:11:12.328 associated memzone info: size: 0.500366 MiB name: 
RG_MP_bdev_io_58069 00:11:12.328 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:11:12.328 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:12.328 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:11:12.328 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:12.328 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:11:12.328 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:12.328 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:11:12.328 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58069 00:11:12.328 element at address: 0x20000085df80 with size: 0.125549 MiB 00:11:12.328 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58069 00:11:12.328 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:11:12.328 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:12.328 element at address: 0x200028864140 with size: 0.023804 MiB 00:11:12.328 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:12.328 element at address: 0x200000859d40 with size: 0.016174 MiB 00:11:12.328 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58069 00:11:12.328 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:11:12.328 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:12.328 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:11:12.328 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58069 00:11:12.328 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:11:12.328 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58069 00:11:12.328 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:11:12.328 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58069 00:11:12.328 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:11:12.328 associated memzone info: size: 0.000183 MiB 
name: MP_Session_Pool 00:11:12.328 13:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:12.328 13:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58069 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58069 ']' 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58069 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58069 00:11:12.328 killing process with pid 58069 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58069' 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58069 00:11:12.328 13:05:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58069 00:11:14.858 ************************************ 00:11:14.858 END TEST dpdk_mem_utility 00:11:14.858 ************************************ 00:11:14.858 00:11:14.858 real 0m4.294s 00:11:14.858 user 0m4.190s 00:11:14.858 sys 0m0.795s 00:11:14.858 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.858 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:14.858 13:05:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:14.858 13:05:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.858 13:05:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.858 13:05:21 -- common/autotest_common.sh@10 -- # 
set +x 00:11:14.858 ************************************ 00:11:14.858 START TEST event 00:11:14.858 ************************************ 00:11:14.858 13:05:21 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:14.858 * Looking for test storage... 00:11:14.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:14.858 13:05:21 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.858 13:05:21 event -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.858 13:05:21 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:15.116 13:05:21 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:15.116 13:05:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.116 13:05:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.116 13:05:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.116 13:05:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.116 13:05:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.116 13:05:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.116 13:05:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.116 13:05:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.116 13:05:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.116 13:05:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.116 13:05:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.116 13:05:21 event -- scripts/common.sh@344 -- # case "$op" in 00:11:15.116 13:05:21 event -- scripts/common.sh@345 -- # : 1 00:11:15.116 13:05:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.116 13:05:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.116 13:05:21 event -- scripts/common.sh@365 -- # decimal 1 00:11:15.116 13:05:21 event -- scripts/common.sh@353 -- # local d=1 00:11:15.116 13:05:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.116 13:05:21 event -- scripts/common.sh@355 -- # echo 1 00:11:15.116 13:05:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.116 13:05:21 event -- scripts/common.sh@366 -- # decimal 2 00:11:15.116 13:05:21 event -- scripts/common.sh@353 -- # local d=2 00:11:15.116 13:05:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.116 13:05:21 event -- scripts/common.sh@355 -- # echo 2 00:11:15.116 13:05:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.116 13:05:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.116 13:05:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.116 13:05:21 event -- scripts/common.sh@368 -- # return 0 00:11:15.116 13:05:21 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.116 13:05:21 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:15.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.116 --rc genhtml_branch_coverage=1 00:11:15.116 --rc genhtml_function_coverage=1 00:11:15.116 --rc genhtml_legend=1 00:11:15.116 --rc geninfo_all_blocks=1 00:11:15.116 --rc geninfo_unexecuted_blocks=1 00:11:15.116 00:11:15.116 ' 00:11:15.116 13:05:21 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:15.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.116 --rc genhtml_branch_coverage=1 00:11:15.116 --rc genhtml_function_coverage=1 00:11:15.116 --rc genhtml_legend=1 00:11:15.116 --rc geninfo_all_blocks=1 00:11:15.116 --rc geninfo_unexecuted_blocks=1 00:11:15.116 00:11:15.116 ' 00:11:15.116 13:05:21 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:15.116 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:15.116 --rc genhtml_branch_coverage=1 00:11:15.116 --rc genhtml_function_coverage=1 00:11:15.116 --rc genhtml_legend=1 00:11:15.116 --rc geninfo_all_blocks=1 00:11:15.116 --rc geninfo_unexecuted_blocks=1 00:11:15.116 00:11:15.116 ' 00:11:15.116 13:05:21 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:15.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.116 --rc genhtml_branch_coverage=1 00:11:15.116 --rc genhtml_function_coverage=1 00:11:15.116 --rc genhtml_legend=1 00:11:15.116 --rc geninfo_all_blocks=1 00:11:15.116 --rc geninfo_unexecuted_blocks=1 00:11:15.116 00:11:15.116 ' 00:11:15.116 13:05:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:15.116 13:05:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:15.116 13:05:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:15.116 13:05:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:15.116 13:05:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.116 13:05:21 event -- common/autotest_common.sh@10 -- # set +x 00:11:15.116 ************************************ 00:11:15.116 START TEST event_perf 00:11:15.116 ************************************ 00:11:15.116 13:05:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:15.116 Running I/O for 1 seconds...[2024-12-06 13:05:21.485845] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:15.116 [2024-12-06 13:05:21.486218] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58177 ] 00:11:15.374 [2024-12-06 13:05:21.673040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.374 [2024-12-06 13:05:21.826603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.374 [2024-12-06 13:05:21.826760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.374 [2024-12-06 13:05:21.826916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.374 Running I/O for 1 seconds...[2024-12-06 13:05:21.827532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.750 00:11:16.750 lcore 0: 194629 00:11:16.750 lcore 1: 194628 00:11:16.750 lcore 2: 194627 00:11:16.750 lcore 3: 194628 00:11:16.750 done. 
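The event_perf run above was launched with the core mask `-m 0xF`, and the per-lcore counters confirm four reactors (lcores 0-3) ran. As a rough illustration of how such a hex mask maps to lcore indices (a hypothetical helper, not part of SPDK or this test suite):

```shell
#!/usr/bin/env bash
# Hypothetical helper: expand a hex core mask like SPDK's "-m 0xF"
# into the list of lcore indices whose bit is set.
mask_to_cores() {
    local mask=$(( $1 ))   # accepts 0xF, 15, 0x3, ...
    local i cores=()
    for (( i = 0; i < 32; i++ )); do
        if (( mask & (1 << i) )); then
            cores+=("$i")
        fi
    done
    echo "${cores[@]}"
}

mask_to_cores 0xF   # -> 0 1 2 3  (the four lcores reported above)
mask_to_cores 0x3   # -> 0 1
```

With `0xF` this yields exactly the lcores 0-3 whose event counts appear in the log.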
00:11:16.750 00:11:16.750 real 0m1.717s 00:11:16.750 user 0m4.446s 00:11:16.750 sys 0m0.139s 00:11:16.750 13:05:23 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.750 13:05:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:16.750 ************************************ 00:11:16.750 END TEST event_perf 00:11:16.750 ************************************ 00:11:16.750 13:05:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:16.750 13:05:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:16.750 13:05:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.750 13:05:23 event -- common/autotest_common.sh@10 -- # set +x 00:11:16.750 ************************************ 00:11:16.750 START TEST event_reactor 00:11:16.750 ************************************ 00:11:16.750 13:05:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:16.750 [2024-12-06 13:05:23.254777] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:16.750 [2024-12-06 13:05:23.255116] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58222 ] 00:11:17.009 [2024-12-06 13:05:23.436706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.280 [2024-12-06 13:05:23.588693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.656 test_start 00:11:18.656 oneshot 00:11:18.656 tick 100 00:11:18.656 tick 100 00:11:18.656 tick 250 00:11:18.656 tick 100 00:11:18.656 tick 100 00:11:18.656 tick 100 00:11:18.656 tick 250 00:11:18.656 tick 500 00:11:18.656 tick 100 00:11:18.656 tick 100 00:11:18.656 tick 250 00:11:18.656 tick 100 00:11:18.656 tick 100 00:11:18.656 test_end 00:11:18.656 00:11:18.656 real 0m1.622s 00:11:18.656 user 0m1.395s 00:11:18.656 sys 0m0.118s 00:11:18.656 ************************************ 00:11:18.656 END TEST event_reactor 00:11:18.656 13:05:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.656 13:05:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:18.656 ************************************ 00:11:18.656 13:05:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:18.656 13:05:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:18.656 13:05:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.656 13:05:24 event -- common/autotest_common.sh@10 -- # set +x 00:11:18.656 ************************************ 00:11:18.656 START TEST event_reactor_perf 00:11:18.656 ************************************ 00:11:18.656 13:05:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:18.656 [2024-12-06 
13:05:24.935312] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:18.656 [2024-12-06 13:05:24.935494] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58259 ] 00:11:18.656 [2024-12-06 13:05:25.108384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.914 [2024-12-06 13:05:25.252401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.290 test_start 00:11:20.290 test_end 00:11:20.290 Performance: 282569 events per second 00:11:20.290 ************************************ 00:11:20.290 END TEST event_reactor_perf 00:11:20.290 ************************************ 00:11:20.290 00:11:20.290 real 0m1.586s 00:11:20.290 user 0m1.375s 00:11:20.290 sys 0m0.101s 00:11:20.290 13:05:26 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.290 13:05:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:20.290 13:05:26 event -- event/event.sh@49 -- # uname -s 00:11:20.290 13:05:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:20.290 13:05:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:20.290 13:05:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.290 13:05:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.290 13:05:26 event -- common/autotest_common.sh@10 -- # set +x 00:11:20.290 ************************************ 00:11:20.290 START TEST event_scheduler 00:11:20.290 ************************************ 00:11:20.290 13:05:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:20.290 * Looking for test storage... 
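Every test above is driven through the `run_test <name> <command...>` wrapper, which prints the asterisk START/END banners and `real`/`user`/`sys` timings seen in this log. A minimal hypothetical reduction of that pattern (the real wrapper in autotest_common.sh also validates arguments and manages xtrace):

```shell
#!/usr/bin/env bash
# Hypothetical reduction of the run_test banner/timing pattern:
# print START/END markers around a timed command, preserving its
# exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo true   # banners around a trivially passing command
```

The `time` keyword is what produces the `real 0m1.717s` style lines interleaved with the banners above.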
00:11:20.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:20.290 13:05:26 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.290 13:05:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.290 13:05:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.290 13:05:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.290 13:05:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:20.290 13:05:26 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.290 13:05:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.290 --rc genhtml_branch_coverage=1 00:11:20.290 --rc genhtml_function_coverage=1 00:11:20.290 --rc genhtml_legend=1 00:11:20.290 --rc geninfo_all_blocks=1 00:11:20.291 --rc geninfo_unexecuted_blocks=1 00:11:20.291 00:11:20.291 ' 00:11:20.291 13:05:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.291 
--rc genhtml_branch_coverage=1 00:11:20.291 --rc genhtml_function_coverage=1 00:11:20.291 --rc genhtml_legend=1 00:11:20.291 --rc geninfo_all_blocks=1 00:11:20.291 --rc geninfo_unexecuted_blocks=1 00:11:20.291 00:11:20.291 ' 00:11:20.291 13:05:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.291 --rc genhtml_branch_coverage=1 00:11:20.291 --rc genhtml_function_coverage=1 00:11:20.291 --rc genhtml_legend=1 00:11:20.291 --rc geninfo_all_blocks=1 00:11:20.291 --rc geninfo_unexecuted_blocks=1 00:11:20.291 00:11:20.291 ' 00:11:20.291 13:05:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.291 --rc genhtml_branch_coverage=1 00:11:20.291 --rc genhtml_function_coverage=1 00:11:20.291 --rc genhtml_legend=1 00:11:20.291 --rc geninfo_all_blocks=1 00:11:20.291 --rc geninfo_unexecuted_blocks=1 00:11:20.291 00:11:20.291 ' 00:11:20.291 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:20.291 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58329 00:11:20.291 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:20.291 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:20.291 13:05:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58329 00:11:20.291 13:05:26 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58329 ']' 00:11:20.291 13:05:26 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.291 13:05:26 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.291 13:05:26 event.event_scheduler -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.291 13:05:26 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.291 13:05:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:20.548 [2024-12-06 13:05:26.823203] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:20.548 [2024-12-06 13:05:26.823690] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58329 ] 00:11:20.548 [2024-12-06 13:05:27.011925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.807 [2024-12-06 13:05:27.199682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.807 [2024-12-06 13:05:27.199830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.807 [2024-12-06 13:05:27.199910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.807 [2024-12-06 13:05:27.199886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.375 13:05:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.375 13:05:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:11:21.375 13:05:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:21.375 13:05:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.375 13:05:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:21.375 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:21.375 POWER: Cannot set governor of lcore 0 to userspace 00:11:21.375 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:21.375 POWER: Cannot set governor of lcore 0 to performance 00:11:21.375 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:21.375 POWER: Cannot set governor of lcore 0 to userspace 00:11:21.375 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:21.375 POWER: Cannot set governor of lcore 0 to userspace 00:11:21.375 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:11:21.375 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:21.375 POWER: Unable to set Power Management Environment for lcore 0 00:11:21.375 [2024-12-06 13:05:27.884768] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:11:21.375 [2024-12-06 13:05:27.884908] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:11:21.375 [2024-12-06 13:05:27.885026] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:21.375 [2024-12-06 13:05:27.885196] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:21.375 [2024-12-06 13:05:27.885306] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:21.375 [2024-12-06 13:05:27.885410] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:21.375 13:05:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.375 13:05:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:21.375 13:05:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.375 13:05:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 [2024-12-06 13:05:28.260188] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
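The POWER errors above are expected on a VM: the dpdk governor probes `/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor`, which typically does not exist without host frequency-scaling support, so the dynamic scheduler falls back to its defaults (load limit 20, core limit 80, core busy 95). A hypothetical sketch of that availability probe (not the actual DPDK code path):

```shell
#!/usr/bin/env bash
# Hypothetical probe mirroring the POWER errors above: governor
# control requires a writable cpufreq sysfs node, usually absent
# inside VMs.
can_set_governor() {
    local cpu=${1:-0}
    [ -w "/sys/devices/system/cpu/cpu${cpu}/cpufreq/scaling_governor" ]
}

if can_set_governor 0; then
    echo "cpufreq governor control available"
else
    echo "no cpufreq control (expected on VMs)"
fi
```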
00:11:21.943 13:05:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.943 13:05:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:21.943 13:05:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.943 13:05:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.943 13:05:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 ************************************ 00:11:21.943 START TEST scheduler_create_thread 00:11:21.943 ************************************ 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 2 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 3 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 4 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 5 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 6 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.943 7 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.943 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 8 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.944 9 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.944 10 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.944 13:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:23.321 13:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.321 13:05:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:23.321 13:05:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:23.321 13:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.321 13:05:29 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:24.708 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.708 ************************************ 00:11:24.708 END TEST scheduler_create_thread 00:11:24.708 ************************************ 00:11:24.708 00:11:24.708 real 0m2.623s 00:11:24.708 user 0m0.016s 00:11:24.708 sys 0m0.008s 00:11:24.708 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.708 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:24.708 13:05:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:24.708 13:05:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58329 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58329 ']' 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58329 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58329 00:11:24.709 killing process with pid 58329 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58329' 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58329 00:11:24.709 13:05:30 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58329 00:11:24.969 [2024-12-06 13:05:31.376586] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:26.344 ************************************ 00:11:26.344 END TEST event_scheduler 00:11:26.344 ************************************ 00:11:26.344 00:11:26.344 real 0m5.943s 00:11:26.344 user 0m10.409s 00:11:26.344 sys 0m0.623s 00:11:26.344 13:05:32 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.344 13:05:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 13:05:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:26.344 13:05:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:26.344 13:05:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:26.344 13:05:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.344 13:05:32 event -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 ************************************ 00:11:26.344 START TEST app_repeat 00:11:26.344 ************************************ 00:11:26.344 13:05:32 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:26.344 Process app_repeat pid: 58446 00:11:26.344 spdk_app_start Round 0 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58446 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 
1' SIGINT SIGTERM EXIT 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58446' 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:26.344 13:05:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58446 /var/tmp/spdk-nbd.sock 00:11:26.344 13:05:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58446 ']' 00:11:26.344 13:05:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:26.344 13:05:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.344 13:05:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:26.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:26.344 13:05:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.344 13:05:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 [2024-12-06 13:05:32.606706] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:26.344 [2024-12-06 13:05:32.607090] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58446 ] 00:11:26.344 [2024-12-06 13:05:32.786681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:26.654 [2024-12-06 13:05:32.957237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.654 [2024-12-06 13:05:32.957237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.238 13:05:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.238 13:05:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:27.238 13:05:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:27.495 Malloc0 00:11:27.495 13:05:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:28.061 Malloc1 00:11:28.061 13:05:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:28.061 13:05:34 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.061 13:05:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:28.061 /dev/nbd0 00:11:28.319 13:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:28.319 13:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:28.319 1+0 records in 00:11:28.319 1+0 
records out 00:11:28.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442969 s, 9.2 MB/s 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.319 13:05:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:28.319 13:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.319 13:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.319 13:05:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:28.578 /dev/nbd1 00:11:28.578 13:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:28.578 13:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:28.578 1+0 records in 00:11:28.578 1+0 records out 00:11:28.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867409 s, 4.7 MB/s 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.578 13:05:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:28.578 13:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.578 13:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.578 13:05:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:28.578 13:05:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.578 13:05:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:28.836 13:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:28.836 { 00:11:28.836 "nbd_device": "/dev/nbd0", 00:11:28.836 "bdev_name": "Malloc0" 00:11:28.836 }, 00:11:28.836 { 00:11:28.836 "nbd_device": "/dev/nbd1", 00:11:28.836 "bdev_name": "Malloc1" 00:11:28.836 } 00:11:28.836 ]' 00:11:28.836 13:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:28.836 { 00:11:28.836 "nbd_device": "/dev/nbd0", 00:11:28.836 "bdev_name": "Malloc0" 00:11:28.836 }, 00:11:28.836 { 00:11:28.836 "nbd_device": "/dev/nbd1", 00:11:28.836 "bdev_name": "Malloc1" 00:11:28.836 } 00:11:28.836 ]' 00:11:28.836 13:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
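The `waitfornbd` calls traced above (`common/autotest_common.sh@872-893`) poll `/proc/partitions` until the nbd device appears, then do a single 4 KiB direct read to confirm I/O works. A minimal POSIX-shell re-creation of the polling half is sketched below; the second, partitions-file argument is an assumption added here so the sketch can be exercised against a fixture file instead of the live `/proc/partitions` the real helper reads.

```shell
# Hypothetical sketch of the waitfornbd polling loop seen in the log.
# The real helper always reads /proc/partitions; the optional second
# argument is an assumption for testability, not part of the original.
waitfornbd() {
    nbd_name=$1
    partitions=${2:-/proc/partitions}
    i=1
    while [ "$i" -le 20 ]; do
        # -w matches nbd0 as a whole word, so nbd0 does not match nbd01
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

The bounded retry (20 attempts) mirrors the `(( i <= 20 ))` guards visible in the trace, so a device that never registers fails the test instead of hanging it.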
00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:29.095 /dev/nbd1' 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:29.095 /dev/nbd1' 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:29.095 256+0 records in 00:11:29.095 256+0 records out 00:11:29.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632732 s, 166 MB/s 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:29.095 256+0 records in 00:11:29.095 256+0 records out 00:11:29.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231494 s, 45.3 MB/s 00:11:29.095 13:05:35 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:29.095 256+0 records in 00:11:29.095 256+0 records out 00:11:29.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315331 s, 33.3 MB/s 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.095 13:05:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.353 13:05:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.633 13:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:29.892 13:05:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:29.892 13:05:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:30.460 13:05:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:31.837 [2024-12-06 13:05:38.061629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:31.837 [2024-12-06 13:05:38.206725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.837 [2024-12-06 13:05:38.206738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.095 
[2024-12-06 13:05:38.418385] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:32.095 [2024-12-06 13:05:38.418508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:33.470 spdk_app_start Round 1 00:11:33.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:33.470 13:05:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:33.470 13:05:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:33.470 13:05:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58446 /var/tmp/spdk-nbd.sock 00:11:33.470 13:05:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58446 ']' 00:11:33.470 13:05:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:33.470 13:05:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.470 13:05:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:33.470 13:05:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.470 13:05:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:33.729 13:05:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.729 13:05:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:33.729 13:05:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:34.294 Malloc0 00:11:34.294 13:05:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:34.552 Malloc1 00:11:34.552 13:05:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:34.552 13:05:40 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:34.552 13:05:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:34.811 /dev/nbd0 00:11:34.811 13:05:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:34.811 13:05:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:34.811 13:05:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:34.811 13:05:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:34.811 13:05:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:34.812 1+0 records in 00:11:34.812 1+0 records out 00:11:34.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405074 s, 10.1 MB/s 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:34.812 
13:05:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:34.812 13:05:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:34.812 13:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.812 13:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:34.812 13:05:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:35.069 /dev/nbd1 00:11:35.069 13:05:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:35.069 13:05:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:35.069 1+0 records in 00:11:35.069 1+0 records out 00:11:35.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329337 s, 12.4 MB/s 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:35.069 13:05:41 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:35.069 13:05:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:35.069 13:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.069 13:05:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:35.069 13:05:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:35.069 13:05:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.069 13:05:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:35.326 13:05:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:35.326 { 00:11:35.326 "nbd_device": "/dev/nbd0", 00:11:35.326 "bdev_name": "Malloc0" 00:11:35.326 }, 00:11:35.326 { 00:11:35.326 "nbd_device": "/dev/nbd1", 00:11:35.326 "bdev_name": "Malloc1" 00:11:35.326 } 00:11:35.326 ]' 00:11:35.326 13:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:35.326 { 00:11:35.326 "nbd_device": "/dev/nbd0", 00:11:35.326 "bdev_name": "Malloc0" 00:11:35.326 }, 00:11:35.326 { 00:11:35.326 "nbd_device": "/dev/nbd1", 00:11:35.326 "bdev_name": "Malloc1" 00:11:35.326 } 00:11:35.326 ]' 00:11:35.326 13:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:35.585 /dev/nbd1' 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:35.585 /dev/nbd1' 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:35.585 
13:05:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:35.585 256+0 records in 00:11:35.585 256+0 records out 00:11:35.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049149 s, 213 MB/s 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:35.585 256+0 records in 00:11:35.585 256+0 records out 00:11:35.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273868 s, 38.3 MB/s 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:35.585 256+0 records in 00:11:35.585 256+0 records out 00:11:35.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0386211 s, 27.2 MB/s 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
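The `nbd_dd_data_verify` passes traced above (`bdev/nbd_common.sh@70-85`) generate 1 MiB of random data, `dd` it onto each nbd device, and later `cmp` each device back against the file. The sketch below condenses that write/verify flow; it drops `oflag=direct` and the `-b` flag, and takes the temp-file path from `$TMP_FILE`, so it can run against plain files rather than real nbd devices. Those substitutions are assumptions, not what the original script does.

```shell
# Hypothetical condensation of nbd_dd_data_verify from the log.
# $1: space-separated device list, $2: "write" or "verify".
nbd_dd_data_verify() {
    devices=$1
    operation=$2
    # The real helper hardcodes .../test/event/nbdrandtest; this
    # env-var override is an assumption for testability.
    tmp_file=${TMP_FILE:-/tmp/nbdrandtest}
    if [ "$operation" = write ]; then
        # 256 x 4 KiB of random data, copied onto every device
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for dev in $devices; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        # Compare the first 1 MiB of each device against the source file
        for dev in $devices; do
            cmp -n 1048576 "$tmp_file" "$dev" || return 1
        done
        rm -f "$tmp_file"
    fi
}
```

Writing once and verifying per device is what lets the log's single `nbdrandtest` file check both `/dev/nbd0` and `/dev/nbd1` before it is removed.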
00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.585 13:05:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:35.843 13:05:42 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.843 13:05:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.100 13:05:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.358 13:05:42 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:36.358 13:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:36.358 13:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:36.636 13:05:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:36.636 13:05:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:37.201 13:05:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:38.131 [2024-12-06 13:05:44.608204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:38.403 [2024-12-06 13:05:44.749842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.403 [2024-12-06 13:05:44.749854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.720 [2024-12-06 13:05:44.966944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:38.720 [2024-12-06 13:05:44.967030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:40.108 spdk_app_start Round 2 00:11:40.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
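The `nbd_get_count` checks traced above (`bdev/nbd_common.sh@61-66`) pipe `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks` through `jq -r '.[] | .nbd_device'` and count matches with `grep -c /dev/nbd`, expecting 2 while the disks are attached and 0 after `nbd_stop_disk`. The sketch below reproduces the counting step on a JSON string; it parses with `grep -o` instead of `jq` so it needs no SPDK target or `jq` binary, which is a simplifying assumption.

```shell
# Hypothetical condensation of the nbd_get_count path from the log.
# $1: JSON array in the shape emitted by `rpc.py nbd_get_disks`.
nbd_count_from_json() {
    json=$1
    # Extract every quoted /dev/nbdN path, one per line
    names=$(printf '%s\n' "$json" | grep -o '"/dev/nbd[0-9]*"' | tr -d '"')
    # grep -c prints 0 (but exits non-zero) when nothing matches,
    # matching the `|| true` count=0 branch visible in the trace
    printf '%s\n' "$names" | grep -c /dev/nbd || true
}
```

Counting device paths rather than JSON objects is why the log's empty-list case (`nbd_disks_json='[]'`) cleanly yields `count=0` and lets `'[' 0 -ne 0 ']'` fall through to `return 0`.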
00:11:40.109 13:05:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:40.109 13:05:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:40.109 13:05:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58446 /var/tmp/spdk-nbd.sock 00:11:40.109 13:05:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58446 ']' 00:11:40.109 13:05:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:40.109 13:05:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.109 13:05:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:40.109 13:05:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.109 13:05:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:40.366 13:05:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.366 13:05:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:40.366 13:05:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:40.625 Malloc0 00:11:40.625 13:05:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:41.193 Malloc1 00:11:41.193 13:05:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:41.193 13:05:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:41.507 /dev/nbd0 00:11:41.507 13:05:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:41.507 13:05:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:41.507 1+0 records in 00:11:41.507 1+0 records out 00:11:41.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292569 s, 14.0 MB/s 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:41.507 13:05:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:41.507 13:05:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.507 13:05:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:41.507 13:05:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:41.765 /dev/nbd1 00:11:41.765 13:05:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:41.765 13:05:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:41.765 13:05:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:41.765 13:05:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:41.765 13:05:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:41.765 13:05:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:41.765 13:05:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:41.766 13:05:48 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:11:41.766 13:05:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:41.766 13:05:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:41.766 13:05:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:41.766 1+0 records in 00:11:41.766 1+0 records out 00:11:41.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323155 s, 12.7 MB/s 00:11:41.766 13:05:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:41.766 13:05:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:41.766 13:05:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:41.766 13:05:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:41.766 13:05:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:41.766 13:05:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.766 13:05:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:41.766 13:05:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:41.766 13:05:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:41.766 13:05:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:42.025 13:05:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:42.025 { 00:11:42.025 "nbd_device": "/dev/nbd0", 00:11:42.025 "bdev_name": "Malloc0" 00:11:42.025 }, 00:11:42.025 { 00:11:42.025 "nbd_device": "/dev/nbd1", 00:11:42.025 "bdev_name": "Malloc1" 00:11:42.025 } 00:11:42.025 ]' 00:11:42.025 13:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:42.025 { 
00:11:42.025 "nbd_device": "/dev/nbd0", 00:11:42.025 "bdev_name": "Malloc0" 00:11:42.025 }, 00:11:42.025 { 00:11:42.025 "nbd_device": "/dev/nbd1", 00:11:42.025 "bdev_name": "Malloc1" 00:11:42.025 } 00:11:42.025 ]' 00:11:42.025 13:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:42.284 /dev/nbd1' 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:42.284 /dev/nbd1' 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:42.284 256+0 records in 00:11:42.284 256+0 records out 00:11:42.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107774 s, 97.3 MB/s 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:42.284 13:05:48 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:42.284 256+0 records in 00:11:42.284 256+0 records out 00:11:42.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302018 s, 34.7 MB/s 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:42.284 256+0 records in 00:11:42.284 256+0 records out 00:11:42.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291303 s, 36.0 MB/s 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:42.284 13:05:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
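The trace above runs the nbd_dd_data_verify cycle: fill a temp file with random data, dd it onto each nbd device, then cmp each device back against the pattern. A minimal re-creation of that cycle, with plain temp files standing in for /dev/nbd0 and /dev/nbd1 so the sketch runs without an SPDK target (the real helper writes with oflag=direct and compares with cmp -b; those flags are dropped here because they require real block devices):

```shell
# write/verify cycle sketched from the trace, files standing in for devices
tmp_file=$(mktemp)
nbd_list=("$(mktemp)" "$(mktemp)")

# write phase: one shared 1 MiB random pattern (4096 * 256), copied to every target
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 status=none
done

# verify phase: every target must match the first 1 MiB of the pattern
# (1048576 bytes, i.e. the trace's "cmp -b -n 1M")
status=ok
for i in "${nbd_list[@]}"; do
    cmp -n 1048576 "$tmp_file" "$i" || status=corrupt
done
rm -f "$tmp_file" "${nbd_list[@]}"
echo "$status"
```

Writing the same pattern through the nbd device and reading it back through cmp is what actually proves the bdev round-trips data, which is why the temp file is kept until both devices have been compared.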
00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.285 13:05:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.544 13:05:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:42.804 13:05:49 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:42.804 13:05:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:43.063 13:05:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:43.063 13:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:43.063 13:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:43.322 13:05:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:43.322 13:05:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:43.580 13:05:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:44.956 
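Both waitfornbd (after start) and waitfornbd_exit (after stop) in the trace poll /proc/partitions up to 20 times for the device name before giving up. A generalized sketch of that retry loop, polling an arbitrary predicate with the same budget — `wait_for` is a hypothetical name for illustration, not an SPDK helper:

```shell
# retry loop sketched from waitfornbd/waitfornbd_exit: poll a predicate
# up to 20 times, 0.1 s apart, and report whether it ever held
wait_for() {
    local i
    for ((i = 1; i <= 20; i++)); do
        if "$@"; then
            return 0   # predicate holds, e.g. grep -q -w nbd0 /proc/partitions
        fi
        sleep 0.1
    done
    return 1           # retry budget exhausted
}

touch /tmp/wait_for_demo
wait_for test -e /tmp/wait_for_demo && echo "present"
rm -f /tmp/wait_for_demo
```

The trace's start-side probe additionally does a 4 KiB `iflag=direct` read from the device after the grep succeeds, so a device only counts as ready once it answers real I/O, not merely once it appears in /proc/partitions.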
[2024-12-06 13:05:51.257612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:44.956 [2024-12-06 13:05:51.402317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.956 [2024-12-06 13:05:51.402332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.215 [2024-12-06 13:05:51.618263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:45.215 [2024-12-06 13:05:51.618339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:46.592 13:05:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58446 /var/tmp/spdk-nbd.sock 00:11:46.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:46.592 13:05:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58446 ']' 00:11:46.592 13:05:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:46.592 13:05:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.592 13:05:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:46.592 13:05:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.592 13:05:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:47.159 13:05:53 event.app_repeat -- event/event.sh@39 -- # killprocess 58446 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58446 ']' 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58446 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58446 00:11:47.159 killing process with pid 58446 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58446' 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58446 00:11:47.159 13:05:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58446 00:11:48.093 spdk_app_start is called in Round 0. 00:11:48.093 Shutdown signal received, stop current app iteration 00:11:48.093 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization... 00:11:48.093 spdk_app_start is called in Round 1. 00:11:48.093 Shutdown signal received, stop current app iteration 00:11:48.093 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization... 00:11:48.093 spdk_app_start is called in Round 2. 
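The killprocess helper exercised here (and again for every spdk_tgt below) follows one pattern: confirm the pid is alive with `kill -0`, look up its name with ps, refuse to signal a `sudo` wrapper directly, then kill and reap it. A condensed sketch of that sequence, demonstrated against a throwaway `sleep` child rather than a real SPDK reactor:

```shell
# teardown pattern sketched from the trace's killprocess helper
killprocess() {
    local p=$1 process_name
    kill -0 "$p" || return 1                  # nothing to do if already gone
    process_name=$(ps --no-headers -o comm= -p "$p")
    # the real helper special-cases sudo (it must signal the child, not the
    # wrapper); this sketch simply refuses in that case
    [ "$process_name" != sudo ] || return 1
    echo "killing process with pid $p"
    kill "$p"
    wait "$p" 2>/dev/null || true             # reap; SIGTERM makes wait non-zero
}

sleep 30 &
killprocess $!
```

Reaping with `wait` matters in a test harness: it prevents zombie processes from accumulating across the hundreds of app start/stop cycles a run like this performs.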
00:11:48.093 Shutdown signal received, stop current app iteration 00:11:48.093 Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 reinitialization... 00:11:48.093 spdk_app_start is called in Round 3. 00:11:48.093 Shutdown signal received, stop current app iteration 00:11:48.093 13:05:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:48.093 13:05:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:48.093 00:11:48.093 real 0m21.881s 00:11:48.093 user 0m48.099s 00:11:48.093 sys 0m3.362s 00:11:48.093 13:05:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.093 ************************************ 00:11:48.093 END TEST app_repeat 00:11:48.093 ************************************ 00:11:48.093 13:05:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:48.093 13:05:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:48.093 13:05:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:48.093 13:05:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.093 13:05:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.093 13:05:54 event -- common/autotest_common.sh@10 -- # set +x 00:11:48.093 ************************************ 00:11:48.093 START TEST cpu_locks 00:11:48.093 ************************************ 00:11:48.093 13:05:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:48.093 * Looking for test storage... 
00:11:48.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:48.093 13:05:54 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.093 13:05:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.093 13:05:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.352 13:05:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.352 13:05:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:48.352 13:05:54 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.352 13:05:54 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.352 --rc genhtml_branch_coverage=1 00:11:48.352 --rc genhtml_function_coverage=1 00:11:48.352 --rc genhtml_legend=1 00:11:48.352 --rc geninfo_all_blocks=1 00:11:48.352 --rc geninfo_unexecuted_blocks=1 00:11:48.352 00:11:48.352 ' 00:11:48.352 13:05:54 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.352 --rc genhtml_branch_coverage=1 00:11:48.352 --rc genhtml_function_coverage=1 00:11:48.352 --rc genhtml_legend=1 00:11:48.352 --rc geninfo_all_blocks=1 00:11:48.352 --rc geninfo_unexecuted_blocks=1 
00:11:48.352 00:11:48.352 ' 00:11:48.352 13:05:54 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.352 --rc genhtml_branch_coverage=1 00:11:48.352 --rc genhtml_function_coverage=1 00:11:48.352 --rc genhtml_legend=1 00:11:48.352 --rc geninfo_all_blocks=1 00:11:48.352 --rc geninfo_unexecuted_blocks=1 00:11:48.352 00:11:48.352 ' 00:11:48.353 13:05:54 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.353 --rc genhtml_branch_coverage=1 00:11:48.353 --rc genhtml_function_coverage=1 00:11:48.353 --rc genhtml_legend=1 00:11:48.353 --rc geninfo_all_blocks=1 00:11:48.353 --rc geninfo_unexecuted_blocks=1 00:11:48.353 00:11:48.353 ' 00:11:48.353 13:05:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:48.353 13:05:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:48.353 13:05:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:48.353 13:05:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:48.353 13:05:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.353 13:05:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.353 13:05:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:48.353 ************************************ 00:11:48.353 START TEST default_locks 00:11:48.353 ************************************ 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58930 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:48.353 
13:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58930 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58930 ']' 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.353 13:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:48.353 [2024-12-06 13:05:54.809732] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
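The lcov version gate traced a few lines earlier (scripts/common.sh, `lt 1.15 2` via cmp_versions) splits both versions on ".", "-" and ":" and compares field by field, numerically, treating missing fields as 0. A condensed re-creation under that reading of the trace — the real function also validates each field through a `decimal` helper, which this sketch assumes away by taking numeric fields on faith:

```shell
# version comparison condensed from the cmp_versions steps in the trace
lt() {  # lt A B: succeed when version A sorts strictly before version B
    local -a ver1 ver2
    local IFS='.-:'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}

if lt 1.15 2; then echo "1.15 < 2"; fi
```

Comparing numerically per field is what makes 1.15 sort before 2 (and before 1.100); a plain string comparison would get both wrong.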
00:11:48.353 [2024-12-06 13:05:54.810172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58930 ] 00:11:48.614 [2024-12-06 13:05:54.985782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.614 [2024-12-06 13:05:55.135138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.987 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.987 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:49.987 13:05:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58930 00:11:49.987 13:05:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58930 00:11:49.987 13:05:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58930 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58930 ']' 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58930 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58930 00:11:50.245 killing process with pid 58930 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58930' 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58930 00:11:50.245 13:05:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58930 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58930 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58930 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:52.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58930 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58930 ']' 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:52.768 ERROR: process (pid: 58930) is no longer running 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:52.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58930) - No such process 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:52.768 00:11:52.768 real 0m4.352s 00:11:52.768 user 0m4.352s 00:11:52.768 sys 0m0.843s 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.768 ************************************ 00:11:52.768 END TEST default_locks 00:11:52.768 ************************************ 00:11:52.768 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:52.768 13:05:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:52.768 13:05:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:11:52.768 13:05:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.768 13:05:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:52.768 ************************************ 00:11:52.768 START TEST default_locks_via_rpc 00:11:52.768 ************************************ 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59005 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59005 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59005 ']' 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.768 13:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.768 [2024-12-06 13:05:59.210000] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
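The `NOT waitforlisten` sequence traced just above is a negative test: waitforlisten against a dead pid is *expected* to fail, and the NOT wrapper inverts its exit status so the expected failure counts as a pass. A condensed sketch of that inversion; the real helper in autotest_common.sh additionally normalizes statuses above 128 and can match an expected-status pattern before inverting:

```shell
# expected-failure wrapper sketched from the trace's NOT helper
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # NOT succeeds exactly when the wrapped command failed
}

if NOT false; then echo "failure was expected and observed"; fi
```

Capturing the status with `|| es=$?` instead of running the command bare is deliberate: it keeps a `set -e` harness from aborting on the very failure the test is trying to observe.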
00:11:52.768 [2024-12-06 13:05:59.210206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59005 ] 00:11:53.027 [2024-12-06 13:05:59.395728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.027 [2024-12-06 13:05:59.549495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.401 13:06:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59005 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59005 00:11:54.401 13:06:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59005 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59005 ']' 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59005 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59005 00:11:54.658 killing process with pid 59005 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59005' 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59005 00:11:54.658 13:06:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59005 00:11:57.185 ************************************ 00:11:57.185 END TEST default_locks_via_rpc 00:11:57.185 ************************************ 00:11:57.185 00:11:57.185 real 0m4.428s 00:11:57.185 user 0m4.337s 00:11:57.185 sys 0m0.908s 00:11:57.185 
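The `locks_exist` and `killprocess` helpers traced throughout this suite reduce to two shell idioms: checking whether a PID holds one of SPDK's `spdk_cpu_lock` file locks, and probing process liveness with `kill -0`. A minimal sketch follows; the function names mirror the trace, but the bodies are a reconstruction, assuming `lslocks(8)` from util-linux is installed:

```shell
#!/usr/bin/env bash
# Sketch of the helpers seen in the xtrace above (reconstructed, not
# SPDK's exact source). Assumption: lslocks(8) is available and SPDK's
# per-core lock files contain "spdk_cpu_lock" in their path.

# Exit 0 if the given PID holds an spdk_cpu_lock file lock, mirroring
# the traced pipeline `lslocks -p $pid | grep -q spdk_cpu_lock`.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# Exit 0 if a process with this PID exists: kill -0 delivers no signal,
# it only checks that the PID is valid and signalable.
process_alive() {
    kill -0 "$1" 2>/dev/null
}
```

In the trace, `killprocess` performs the same `kill -0` probe (visible as `kill -0 59005`) before sending the real signal, then `wait`s for the reactor process to exit.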
13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.185 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.185 13:06:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:57.185 13:06:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:57.185 13:06:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.185 13:06:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:57.185 ************************************ 00:11:57.185 START TEST non_locking_app_on_locked_coremask 00:11:57.185 ************************************ 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59079 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59079 /var/tmp/spdk.sock 00:11:57.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59079 ']' 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.185 13:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:57.185 [2024-12-06 13:06:03.699681] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:11:57.185 [2024-12-06 13:06:03.699896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59079 ] 00:11:57.443 [2024-12-06 13:06:03.885174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.702 [2024-12-06 13:06:04.036552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59106 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59106 /var/tmp/spdk2.sock 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59106 ']' 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.659 13:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:58.659 [2024-12-06 13:06:05.181413] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:11:58.659 [2024-12-06 13:06:05.181947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59106 ] 00:11:58.917 [2024-12-06 13:06:05.389368] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:58.917 [2024-12-06 13:06:05.389498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.176 [2024-12-06 13:06:05.697106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.708 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.708 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:01.708 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59079 00:12:01.708 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:01.708 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59079 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59079 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59079 ']' 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59079 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
59079 00:12:02.643 killing process with pid 59079 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59079' 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59079 00:12:02.643 13:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59079 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59106 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59106 ']' 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59106 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59106 00:12:07.946 killing process with pid 59106 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59106' 00:12:07.946 13:06:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59106 00:12:07.946 13:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59106 00:12:09.851 ************************************ 00:12:09.851 END TEST non_locking_app_on_locked_coremask 00:12:09.851 ************************************ 00:12:09.851 00:12:09.851 real 0m12.743s 00:12:09.851 user 0m13.107s 00:12:09.851 sys 0m1.876s 00:12:09.851 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.851 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:09.851 13:06:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:09.851 13:06:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.851 13:06:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.851 13:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:09.851 ************************************ 00:12:09.851 START TEST locking_app_on_unlocked_coremask 00:12:09.851 ************************************ 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59264 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59264 /var/tmp/spdk.sock 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59264 ']' 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 
00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.851 13:06:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:10.110 [2024-12-06 13:06:16.508896] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:10.110 [2024-12-06 13:06:16.509085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59264 ] 00:12:10.368 [2024-12-06 13:06:16.702526] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:10.369 [2024-12-06 13:06:16.702650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.369 [2024-12-06 13:06:16.875056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59281 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59281 /var/tmp/spdk2.sock 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59281 ']' 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:11.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.322 13:06:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:11.582 [2024-12-06 13:06:17.957994] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:11.582 [2024-12-06 13:06:17.958487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59281 ] 00:12:11.844 [2024-12-06 13:06:18.150126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.103 [2024-12-06 13:06:18.448078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.636 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.636 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:14.636 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59281 00:12:14.636 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59281 00:12:14.636 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59264 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59264 ']' 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59264 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59264 00:12:15.226 killing process with pid 59264 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59264' 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59264 00:12:15.226 13:06:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59264 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59281 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59281 ']' 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59281 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59281 00:12:20.539 killing process with pid 59281 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59281' 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59281 00:12:20.539 13:06:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59281 00:12:23.820 00:12:23.820 real 0m13.398s 00:12:23.820 user 0m13.678s 00:12:23.820 sys 0m1.894s 00:12:23.820 13:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.820 ************************************ 00:12:23.820 END TEST locking_app_on_unlocked_coremask 00:12:23.820 13:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:23.820 ************************************ 00:12:23.820 13:06:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:23.820 13:06:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.820 13:06:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.820 13:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:23.820 ************************************ 00:12:23.820 START TEST locking_app_on_locked_coremask 00:12:23.820 ************************************ 00:12:23.820 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59446 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59446 /var/tmp/spdk.sock 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59446 ']' 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.821 13:06:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:23.821 [2024-12-06 13:06:29.952093] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:23.821 [2024-12-06 13:06:29.952305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59446 ] 00:12:23.821 [2024-12-06 13:06:30.134806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.821 [2024-12-06 13:06:30.305637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59467 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59467 /var/tmp/spdk2.sock 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59467 /var/tmp/spdk2.sock 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59467 /var/tmp/spdk2.sock 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59467 ']' 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:25.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.200 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:25.200 [2024-12-06 13:06:31.543469] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:25.200 [2024-12-06 13:06:31.543762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59467 ] 00:12:25.459 [2024-12-06 13:06:31.756033] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59446 has claimed it. 00:12:25.459 [2024-12-06 13:06:31.756173] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:25.717 ERROR: process (pid: 59467) is no longer running 00:12:25.717 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59467) - No such process 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59446 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59446 00:12:25.717 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59446 00:12:26.286 13:06:32 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59446 ']' 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59446 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59446 00:12:26.286 killing process with pid 59446 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59446' 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59446 00:12:26.286 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59446 00:12:28.846 ************************************ 00:12:28.846 END TEST locking_app_on_locked_coremask 00:12:28.846 ************************************ 00:12:28.846 00:12:28.846 real 0m5.169s 00:12:28.846 user 0m5.436s 00:12:28.846 sys 0m1.105s 00:12:28.846 13:06:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.846 13:06:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:28.846 13:06:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:28.846 13:06:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:12:28.846 13:06:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.846 13:06:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:28.846 ************************************ 00:12:28.846 START TEST locking_overlapped_coremask 00:12:28.846 ************************************ 00:12:28.846 13:06:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:12:28.846 13:06:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59537 00:12:28.846 13:06:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59537 /var/tmp/spdk.sock 00:12:28.846 13:06:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:28.846 13:06:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59537 ']' 00:12:28.846 13:06:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.846 13:06:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.846 13:06:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.847 13:06:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.847 13:06:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:28.847 [2024-12-06 13:06:35.199286] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:28.847 [2024-12-06 13:06:35.199535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59537 ] 00:12:29.103 [2024-12-06 13:06:35.377360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:29.103 [2024-12-06 13:06:35.522234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.103 [2024-12-06 13:06:35.522340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.103 [2024-12-06 13:06:35.522379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59560 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59560 /var/tmp/spdk2.sock 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59560 /var/tmp/spdk2.sock 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:30.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59560 /var/tmp/spdk2.sock 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59560 ']' 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.034 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:30.291 [2024-12-06 13:06:36.616624] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:30.291 [2024-12-06 13:06:36.616811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59560 ] 00:12:30.549 [2024-12-06 13:06:36.819259] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59537 has claimed it. 00:12:30.549 [2024-12-06 13:06:36.819416] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:12:30.808 ERROR: process (pid: 59560) is no longer running 00:12:30.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59560) - No such process 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59537 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59537 ']' 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59537 00:12:30.808 13:06:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59537 00:12:30.808 killing process with pid 59537 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59537' 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59537 00:12:30.808 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59537 00:12:33.342 00:12:33.342 real 0m4.643s 00:12:33.342 user 0m12.605s 00:12:33.342 sys 0m0.806s 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:33.342 ************************************ 00:12:33.342 END TEST locking_overlapped_coremask 00:12:33.342 ************************************ 00:12:33.342 13:06:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:33.342 13:06:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:33.342 13:06:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.342 13:06:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:33.342 ************************************ 00:12:33.342 START TEST 
locking_overlapped_coremask_via_rpc 00:12:33.342 ************************************ 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:33.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59630 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59630 /var/tmp/spdk.sock 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59630 ']' 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.342 13:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.601 [2024-12-06 13:06:39.881390] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:33.601 [2024-12-06 13:06:39.881823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59630 ] 00:12:33.601 [2024-12-06 13:06:40.083636] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:33.601 [2024-12-06 13:06:40.083714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.859 [2024-12-06 13:06:40.230696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.859 [2024-12-06 13:06:40.230843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.859 [2024-12-06 13:06:40.230852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59648 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59648 /var/tmp/spdk2.sock 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59648 ']' 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.794 13:06:41 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:34.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.794 13:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.052 [2024-12-06 13:06:41.337870] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:35.053 [2024-12-06 13:06:41.338072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59648 ] 00:12:35.053 [2024-12-06 13:06:41.542743] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:35.053 [2024-12-06 13:06:41.542836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.620 [2024-12-06 13:06:41.872563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.620 [2024-12-06 13:06:41.872683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.620 [2024-12-06 13:06:41.872701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:38.179 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.179 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:38.179 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:38.179 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.179 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.179 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.179 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.180 13:06:44 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.180 [2024-12-06 13:06:44.189787] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59630 has claimed it. 00:12:38.180 request: 00:12:38.180 { 00:12:38.180 "method": "framework_enable_cpumask_locks", 00:12:38.180 "req_id": 1 00:12:38.180 } 00:12:38.180 Got JSON-RPC error response 00:12:38.180 response: 00:12:38.180 { 00:12:38.180 "code": -32603, 00:12:38.180 "message": "Failed to claim CPU core: 2" 00:12:38.180 } 00:12:38.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59630 /var/tmp/spdk.sock 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59630 ']' 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59648 /var/tmp/spdk2.sock 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59648 ']' 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:38.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.180 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.439 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.439 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:38.439 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:38.439 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:38.439 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:38.439 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:38.439 00:12:38.439 real 0m5.083s 00:12:38.439 user 0m1.909s 00:12:38.439 sys 0m0.249s 00:12:38.439 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.439 13:06:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.439 ************************************ 00:12:38.439 END TEST locking_overlapped_coremask_via_rpc 00:12:38.439 ************************************ 00:12:38.439 13:06:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:38.439 13:06:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59630 ]] 00:12:38.439 13:06:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59630 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59630 ']' 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59630 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59630 00:12:38.439 killing process with pid 59630 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59630' 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59630 00:12:38.439 13:06:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59630 00:12:41.016 13:06:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59648 ]] 00:12:41.016 13:06:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59648 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59648 ']' 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59648 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59648 00:12:41.016 killing process with pid 59648 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59648' 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59648 00:12:41.016 13:06:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59648 00:12:43.543 13:06:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:43.543 13:06:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:43.543 13:06:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59630 ]] 00:12:43.543 13:06:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59630 00:12:43.543 13:06:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59630 ']' 00:12:43.543 13:06:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59630 00:12:43.543 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59630) - No such process 00:12:43.543 Process with pid 59630 is not found 00:12:43.543 13:06:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59630 is not found' 00:12:43.543 13:06:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59648 ]] 00:12:43.543 13:06:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59648 00:12:43.543 13:06:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59648 ']' 00:12:43.543 13:06:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59648 00:12:43.543 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59648) - No such process 00:12:43.543 Process with pid 59648 is not found 00:12:43.543 13:06:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59648 is not found' 00:12:43.543 13:06:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:43.543 ************************************ 00:12:43.543 END TEST cpu_locks 00:12:43.543 ************************************ 00:12:43.543 00:12:43.543 real 0m55.297s 00:12:43.543 user 1m33.478s 00:12:43.543 sys 0m9.177s 00:12:43.543 13:06:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:43.543 13:06:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:43.543 ************************************ 00:12:43.543 END TEST event 00:12:43.543 ************************************ 00:12:43.543 00:12:43.543 real 1m28.609s 00:12:43.543 user 2m39.429s 00:12:43.543 sys 0m13.817s 00:12:43.543 13:06:49 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.543 13:06:49 event -- common/autotest_common.sh@10 -- # set +x 00:12:43.543 13:06:49 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:43.543 13:06:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:43.543 13:06:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.543 13:06:49 -- common/autotest_common.sh@10 -- # set +x 00:12:43.543 ************************************ 00:12:43.543 START TEST thread 00:12:43.543 ************************************ 00:12:43.543 13:06:49 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:43.543 * Looking for test storage... 
00:12:43.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:43.543 13:06:49 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:43.543 13:06:49 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:12:43.543 13:06:49 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:43.543 13:06:50 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:43.543 13:06:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.543 13:06:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.543 13:06:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.543 13:06:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.543 13:06:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.543 13:06:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.543 13:06:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.543 13:06:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.543 13:06:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.543 13:06:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.543 13:06:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.543 13:06:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:43.543 13:06:50 thread -- scripts/common.sh@345 -- # : 1 00:12:43.543 13:06:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.543 13:06:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:43.543 13:06:50 thread -- scripts/common.sh@365 -- # decimal 1 00:12:43.543 13:06:50 thread -- scripts/common.sh@353 -- # local d=1 00:12:43.543 13:06:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.543 13:06:50 thread -- scripts/common.sh@355 -- # echo 1 00:12:43.543 13:06:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.543 13:06:50 thread -- scripts/common.sh@366 -- # decimal 2 00:12:43.543 13:06:50 thread -- scripts/common.sh@353 -- # local d=2 00:12:43.543 13:06:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.543 13:06:50 thread -- scripts/common.sh@355 -- # echo 2 00:12:43.543 13:06:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.543 13:06:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.543 13:06:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.543 13:06:50 thread -- scripts/common.sh@368 -- # return 0 00:12:43.543 13:06:50 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.543 13:06:50 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:43.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.543 --rc genhtml_branch_coverage=1 00:12:43.543 --rc genhtml_function_coverage=1 00:12:43.543 --rc genhtml_legend=1 00:12:43.543 --rc geninfo_all_blocks=1 00:12:43.543 --rc geninfo_unexecuted_blocks=1 00:12:43.543 00:12:43.543 ' 00:12:43.543 13:06:50 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:43.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.543 --rc genhtml_branch_coverage=1 00:12:43.543 --rc genhtml_function_coverage=1 00:12:43.543 --rc genhtml_legend=1 00:12:43.543 --rc geninfo_all_blocks=1 00:12:43.543 --rc geninfo_unexecuted_blocks=1 00:12:43.543 00:12:43.543 ' 00:12:43.543 13:06:50 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:43.543 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.543 --rc genhtml_branch_coverage=1 00:12:43.543 --rc genhtml_function_coverage=1 00:12:43.543 --rc genhtml_legend=1 00:12:43.543 --rc geninfo_all_blocks=1 00:12:43.543 --rc geninfo_unexecuted_blocks=1 00:12:43.544 00:12:43.544 ' 00:12:43.544 13:06:50 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:43.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.544 --rc genhtml_branch_coverage=1 00:12:43.544 --rc genhtml_function_coverage=1 00:12:43.544 --rc genhtml_legend=1 00:12:43.544 --rc geninfo_all_blocks=1 00:12:43.544 --rc geninfo_unexecuted_blocks=1 00:12:43.544 00:12:43.544 ' 00:12:43.544 13:06:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:43.544 13:06:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:43.544 13:06:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.544 13:06:50 thread -- common/autotest_common.sh@10 -- # set +x 00:12:43.802 ************************************ 00:12:43.802 START TEST thread_poller_perf 00:12:43.802 ************************************ 00:12:43.802 13:06:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:43.802 [2024-12-06 13:06:50.119958] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:43.802 [2024-12-06 13:06:50.120145] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:12:43.802 [2024-12-06 13:06:50.310740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.061 [2024-12-06 13:06:50.471035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.061 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:45.465 [2024-12-06T13:06:51.994Z] ====================================== 00:12:45.465 [2024-12-06T13:06:51.994Z] busy:2211984211 (cyc) 00:12:45.465 [2024-12-06T13:06:51.994Z] total_run_count: 292000 00:12:45.465 [2024-12-06T13:06:51.994Z] tsc_hz: 2200000000 (cyc) 00:12:45.465 [2024-12-06T13:06:51.994Z] ====================================== 00:12:45.465 [2024-12-06T13:06:51.994Z] poller_cost: 7575 (cyc), 3443 (nsec) 00:12:45.465 00:12:45.465 real 0m1.645s 00:12:45.465 user 0m1.425s 00:12:45.465 sys 0m0.110s 00:12:45.465 13:06:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.465 13:06:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:45.465 ************************************ 00:12:45.465 END TEST thread_poller_perf 00:12:45.465 ************************************ 00:12:45.465 13:06:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:45.465 13:06:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:45.465 13:06:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.465 13:06:51 thread -- common/autotest_common.sh@10 -- # set +x 00:12:45.465 ************************************ 00:12:45.465 START TEST thread_poller_perf 00:12:45.465 
************************************ 00:12:45.465 13:06:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:45.465 [2024-12-06 13:06:51.822260] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:45.465 [2024-12-06 13:06:51.822942] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59885 ] 00:12:45.723 [2024-12-06 13:06:51.999269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.723 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:45.723 [2024-12-06 13:06:52.133092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.098 [2024-12-06T13:06:53.627Z] ====================================== 00:12:47.098 [2024-12-06T13:06:53.627Z] busy:2204651250 (cyc) 00:12:47.098 [2024-12-06T13:06:53.627Z] total_run_count: 3453000 00:12:47.098 [2024-12-06T13:06:53.627Z] tsc_hz: 2200000000 (cyc) 00:12:47.098 [2024-12-06T13:06:53.627Z] ====================================== 00:12:47.098 [2024-12-06T13:06:53.627Z] poller_cost: 638 (cyc), 290 (nsec) 00:12:47.098 00:12:47.098 real 0m1.624s 00:12:47.098 user 0m1.412s 00:12:47.098 sys 0m0.100s 00:12:47.098 13:06:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.098 13:06:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:47.098 ************************************ 00:12:47.098 END TEST thread_poller_perf 00:12:47.098 ************************************ 00:12:47.098 13:06:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:47.098 ************************************ 00:12:47.098 END TEST thread 00:12:47.098 ************************************ 00:12:47.098 
00:12:47.098 real 0m3.575s 00:12:47.098 user 0m2.970s 00:12:47.098 sys 0m0.372s 00:12:47.098 13:06:53 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.098 13:06:53 thread -- common/autotest_common.sh@10 -- # set +x 00:12:47.098 13:06:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:47.098 13:06:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:47.098 13:06:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:47.098 13:06:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.098 13:06:53 -- common/autotest_common.sh@10 -- # set +x 00:12:47.098 ************************************ 00:12:47.098 START TEST app_cmdline 00:12:47.098 ************************************ 00:12:47.098 13:06:53 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:47.098 * Looking for test storage... 00:12:47.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:47.098 13:06:53 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:47.098 13:06:53 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:12:47.098 13:06:53 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:47.356 13:06:53 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:47.356 13:06:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.357 13:06:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.357 13:06:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.357 13:06:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:47.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.357 --rc genhtml_branch_coverage=1 00:12:47.357 --rc genhtml_function_coverage=1 00:12:47.357 --rc 
genhtml_legend=1 00:12:47.357 --rc geninfo_all_blocks=1 00:12:47.357 --rc geninfo_unexecuted_blocks=1 00:12:47.357 00:12:47.357 ' 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:47.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.357 --rc genhtml_branch_coverage=1 00:12:47.357 --rc genhtml_function_coverage=1 00:12:47.357 --rc genhtml_legend=1 00:12:47.357 --rc geninfo_all_blocks=1 00:12:47.357 --rc geninfo_unexecuted_blocks=1 00:12:47.357 00:12:47.357 ' 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:47.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.357 --rc genhtml_branch_coverage=1 00:12:47.357 --rc genhtml_function_coverage=1 00:12:47.357 --rc genhtml_legend=1 00:12:47.357 --rc geninfo_all_blocks=1 00:12:47.357 --rc geninfo_unexecuted_blocks=1 00:12:47.357 00:12:47.357 ' 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:47.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.357 --rc genhtml_branch_coverage=1 00:12:47.357 --rc genhtml_function_coverage=1 00:12:47.357 --rc genhtml_legend=1 00:12:47.357 --rc geninfo_all_blocks=1 00:12:47.357 --rc geninfo_unexecuted_blocks=1 00:12:47.357 00:12:47.357 ' 00:12:47.357 13:06:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:47.357 13:06:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59974 00:12:47.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:47.357 13:06:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59974 00:12:47.357 13:06:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59974 ']' 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.357 13:06:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:47.357 [2024-12-06 13:06:53.873561] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:47.357 [2024-12-06 13:06:53.874871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:12:47.614 [2024-12-06 13:06:54.075057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.872 [2024-12-06 13:06:54.250633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.808 13:06:55 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.808 13:06:55 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:12:48.808 13:06:55 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:49.065 { 00:12:49.065 "version": "SPDK v25.01-pre git sha1 cf089b398", 00:12:49.065 "fields": { 00:12:49.065 "major": 25, 00:12:49.065 "minor": 1, 00:12:49.065 "patch": 0, 00:12:49.065 "suffix": "-pre", 
00:12:49.065 "commit": "cf089b398" 00:12:49.065 } 00:12:49.065 } 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:49.065 13:06:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:49.065 13:06:55 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:49.631 request: 00:12:49.631 { 00:12:49.631 "method": "env_dpdk_get_mem_stats", 00:12:49.631 "req_id": 1 00:12:49.631 } 00:12:49.631 Got JSON-RPC error response 00:12:49.631 response: 00:12:49.631 { 00:12:49.631 "code": -32601, 00:12:49.631 "message": "Method not found" 00:12:49.631 } 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:49.631 13:06:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59974 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59974 ']' 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59974 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59974 00:12:49.631 killing process with pid 59974 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
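The `env_dpdk_get_mem_stats` call above fails with JSON-RPC error -32601 because `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so only those two methods are dispatchable. A minimal sketch of that allowlist behavior follows; the `dispatch` function is hypothetical illustration, not SPDK's server code, and only the error shape (`code`/`message` per JSON-RPC 2.0) mirrors the logged response:

```python
import json

# Allowlist taken from the spdk_tgt invocation in the log above.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request_json):
    """Reject any RPC outside the allowlist with JSON-RPC 'Method not found'."""
    req = json.loads(request_json)
    if req["method"] not in ALLOWED:
        return {
            "jsonrpc": "2.0",
            "id": req.get("id"),
            "error": {"code": -32601, "message": "Method not found"},
        }
    # Allowed methods would be handled here; return an empty result as a stub.
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": {}}
```

With this model, `env_dpdk_get_mem_stats` yields the -32601 error seen in the log, which is exactly what the `NOT` wrapper in cmdline.sh asserts.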
00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59974' 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 59974 00:12:49.631 13:06:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 59974 00:12:52.161 00:12:52.161 real 0m4.740s 00:12:52.161 user 0m5.155s 00:12:52.161 sys 0m0.801s 00:12:52.161 13:06:58 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.161 13:06:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:52.161 ************************************ 00:12:52.161 END TEST app_cmdline 00:12:52.161 ************************************ 00:12:52.161 13:06:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:52.161 13:06:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:52.161 13:06:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.161 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:12:52.162 ************************************ 00:12:52.162 START TEST version 00:12:52.162 ************************************ 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:52.162 * Looking for test storage... 
00:12:52.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1711 -- # lcov --version 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:52.162 13:06:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.162 13:06:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.162 13:06:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.162 13:06:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.162 13:06:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.162 13:06:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.162 13:06:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.162 13:06:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.162 13:06:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.162 13:06:58 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.162 13:06:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.162 13:06:58 version -- scripts/common.sh@344 -- # case "$op" in 00:12:52.162 13:06:58 version -- scripts/common.sh@345 -- # : 1 00:12:52.162 13:06:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.162 13:06:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.162 13:06:58 version -- scripts/common.sh@365 -- # decimal 1 00:12:52.162 13:06:58 version -- scripts/common.sh@353 -- # local d=1 00:12:52.162 13:06:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.162 13:06:58 version -- scripts/common.sh@355 -- # echo 1 00:12:52.162 13:06:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.162 13:06:58 version -- scripts/common.sh@366 -- # decimal 2 00:12:52.162 13:06:58 version -- scripts/common.sh@353 -- # local d=2 00:12:52.162 13:06:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.162 13:06:58 version -- scripts/common.sh@355 -- # echo 2 00:12:52.162 13:06:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.162 13:06:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.162 13:06:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.162 13:06:58 version -- scripts/common.sh@368 -- # return 0 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:52.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.162 --rc genhtml_branch_coverage=1 00:12:52.162 --rc genhtml_function_coverage=1 00:12:52.162 --rc genhtml_legend=1 00:12:52.162 --rc geninfo_all_blocks=1 00:12:52.162 --rc geninfo_unexecuted_blocks=1 00:12:52.162 00:12:52.162 ' 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:52.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.162 --rc genhtml_branch_coverage=1 00:12:52.162 --rc genhtml_function_coverage=1 00:12:52.162 --rc genhtml_legend=1 00:12:52.162 --rc geninfo_all_blocks=1 00:12:52.162 --rc geninfo_unexecuted_blocks=1 00:12:52.162 00:12:52.162 ' 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:52.162 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.162 --rc genhtml_branch_coverage=1 00:12:52.162 --rc genhtml_function_coverage=1 00:12:52.162 --rc genhtml_legend=1 00:12:52.162 --rc geninfo_all_blocks=1 00:12:52.162 --rc geninfo_unexecuted_blocks=1 00:12:52.162 00:12:52.162 ' 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:52.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.162 --rc genhtml_branch_coverage=1 00:12:52.162 --rc genhtml_function_coverage=1 00:12:52.162 --rc genhtml_legend=1 00:12:52.162 --rc geninfo_all_blocks=1 00:12:52.162 --rc geninfo_unexecuted_blocks=1 00:12:52.162 00:12:52.162 ' 00:12:52.162 13:06:58 version -- app/version.sh@17 -- # get_header_version major 00:12:52.162 13:06:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:52.162 13:06:58 version -- app/version.sh@14 -- # cut -f2 00:12:52.162 13:06:58 version -- app/version.sh@14 -- # tr -d '"' 00:12:52.162 13:06:58 version -- app/version.sh@17 -- # major=25 00:12:52.162 13:06:58 version -- app/version.sh@18 -- # get_header_version minor 00:12:52.162 13:06:58 version -- app/version.sh@14 -- # cut -f2 00:12:52.162 13:06:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:52.162 13:06:58 version -- app/version.sh@14 -- # tr -d '"' 00:12:52.162 13:06:58 version -- app/version.sh@18 -- # minor=1 00:12:52.162 13:06:58 version -- app/version.sh@19 -- # get_header_version patch 00:12:52.162 13:06:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:52.162 13:06:58 version -- app/version.sh@14 -- # cut -f2 00:12:52.162 13:06:58 version -- app/version.sh@14 -- # tr -d '"' 00:12:52.162 13:06:58 version -- app/version.sh@19 -- # patch=0 00:12:52.162 
13:06:58 version -- app/version.sh@20 -- # get_header_version suffix 00:12:52.162 13:06:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:52.162 13:06:58 version -- app/version.sh@14 -- # tr -d '"' 00:12:52.162 13:06:58 version -- app/version.sh@14 -- # cut -f2 00:12:52.162 13:06:58 version -- app/version.sh@20 -- # suffix=-pre 00:12:52.162 13:06:58 version -- app/version.sh@22 -- # version=25.1 00:12:52.162 13:06:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:52.162 13:06:58 version -- app/version.sh@28 -- # version=25.1rc0 00:12:52.162 13:06:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:52.162 13:06:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:52.162 13:06:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:52.162 13:06:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:52.162 00:12:52.162 real 0m0.260s 00:12:52.162 user 0m0.175s 00:12:52.162 sys 0m0.121s 00:12:52.162 ************************************ 00:12:52.162 END TEST version 00:12:52.162 ************************************ 00:12:52.162 13:06:58 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.162 13:06:58 version -- common/autotest_common.sh@10 -- # set +x 00:12:52.162 13:06:58 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:52.162 13:06:58 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:12:52.162 13:06:58 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:52.162 13:06:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:52.162 13:06:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.162 13:06:58 -- 
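The version test above assembles `25.1rc0` from the `SPDK_VERSION_*` defines: major.minor, an optional `.patch` when patch is nonzero, and an `rc0` tail for the `-pre` suffix. A sketch approximating that logic as traced in the log (the function name is illustrative, not from version.sh):

```python
def spdk_version(major, minor, patch, suffix):
    """Mirror version.sh's assembly of the version string from header defines."""
    v = f"{major}.{minor}"
    if patch != 0:          # '(( patch != 0 ))' branch in the trace
        v += f".{patch}"
    if suffix == "-pre":    # pre-release builds are reported as rc0
        v += "rc0"
    return v
```

For the headers parsed in this run (major=25, minor=1, patch=0, suffix=-pre) this matches both the shell-computed version and the `py_version` reported by `python3 -c 'import spdk; ...'`.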
common/autotest_common.sh@10 -- # set +x 00:12:52.162 ************************************ 00:12:52.162 START TEST bdev_raid 00:12:52.162 ************************************ 00:12:52.162 13:06:58 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:52.420 * Looking for test storage... 00:12:52.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:52.420 13:06:58 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@345 -- # : 1 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.421 13:06:58 bdev_raid -- scripts/common.sh@368 -- # return 0 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.421 --rc genhtml_branch_coverage=1 00:12:52.421 --rc genhtml_function_coverage=1 00:12:52.421 --rc genhtml_legend=1 00:12:52.421 --rc geninfo_all_blocks=1 00:12:52.421 --rc geninfo_unexecuted_blocks=1 00:12:52.421 00:12:52.421 ' 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.421 --rc genhtml_branch_coverage=1 00:12:52.421 --rc genhtml_function_coverage=1 00:12:52.421 --rc genhtml_legend=1 00:12:52.421 --rc geninfo_all_blocks=1 00:12:52.421 --rc geninfo_unexecuted_blocks=1 00:12:52.421 00:12:52.421 ' 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:12:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.421 --rc genhtml_branch_coverage=1 00:12:52.421 --rc genhtml_function_coverage=1 00:12:52.421 --rc genhtml_legend=1 00:12:52.421 --rc geninfo_all_blocks=1 00:12:52.421 --rc geninfo_unexecuted_blocks=1 00:12:52.421 00:12:52.421 ' 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:52.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.421 --rc genhtml_branch_coverage=1 00:12:52.421 --rc genhtml_function_coverage=1 00:12:52.421 --rc genhtml_legend=1 00:12:52.421 --rc geninfo_all_blocks=1 00:12:52.421 --rc geninfo_unexecuted_blocks=1 00:12:52.421 00:12:52.421 ' 00:12:52.421 13:06:58 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:52.421 13:06:58 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:12:52.421 13:06:58 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:12:52.421 13:06:58 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:12:52.421 13:06:58 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:12:52.421 13:06:58 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:12:52.421 13:06:58 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.421 13:06:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.421 ************************************ 00:12:52.421 START TEST raid1_resize_data_offset_test 00:12:52.421 ************************************ 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=60167 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60167' 00:12:52.421 Process raid pid: 60167 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60167 00:12:52.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60167 ']' 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.421 13:06:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.679 [2024-12-06 13:06:58.960356] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:12:52.679 [2024-12-06 13:06:58.960559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.679 [2024-12-06 13:06:59.149149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.936 [2024-12-06 13:06:59.299218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.223 [2024-12-06 13:06:59.529587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.223 [2024-12-06 13:06:59.529665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.511 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.511 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:12:53.511 13:06:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:12:53.511 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.511 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.809 malloc0 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.809 malloc1 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.809 13:07:00 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.809 null0 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.809 [2024-12-06 13:07:00.202084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:12:53.809 [2024-12-06 13:07:00.204983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:53.809 [2024-12-06 13:07:00.205256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:12:53.809 [2024-12-06 13:07:00.205543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:53.809 [2024-12-06 13:07:00.205568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:12:53.809 [2024-12-06 13:07:00.206032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:53.809 [2024-12-06 13:07:00.206345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:53.809 [2024-12-06 13:07:00.206370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:53.809 [2024-12-06 13:07:00.206703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.809 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.810 [2024-12-06 13:07:00.270774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.810 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.374 malloc2 00:12:54.374 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.374 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:12:54.374 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.374 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.374 [2024-12-06 13:07:00.884225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:54.632 [2024-12-06 13:07:00.903031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.632 [2024-12-06 13:07:00.905784] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60167 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60167 ']' 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60167 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60167 00:12:54.632 killing process with pid 60167 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60167' 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60167 00:12:54.632 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60167 00:12:54.632 [2024-12-06 13:07:00.990706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.632 [2024-12-06 13:07:00.993431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:12:54.632 [2024-12-06 13:07:00.993706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.632 [2024-12-06 13:07:00.993869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:12:54.632 [2024-12-06 13:07:01.026282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.632 [2024-12-06 13:07:01.026940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.632 [2024-12-06 13:07:01.027139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:56.528 [2024-12-06 13:07:02.816483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.464 13:07:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:12:57.464 00:12:57.464 real 0m5.130s 00:12:57.464 user 0m4.965s 00:12:57.464 sys 0m0.813s 00:12:57.464 13:07:03 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.464 13:07:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.464 ************************************ 00:12:57.464 END TEST raid1_resize_data_offset_test 00:12:57.464 ************************************ 00:12:57.804 13:07:04 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:12:57.804 13:07:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.804 13:07:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.804 13:07:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.804 ************************************ 00:12:57.804 START TEST raid0_resize_superblock_test 00:12:57.804 ************************************ 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:12:57.804 Process raid pid: 60256 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60256 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60256' 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60256 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60256 ']' 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.804 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.804 [2024-12-06 13:07:04.185870] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:12:57.804 [2024-12-06 13:07:04.187407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.064 [2024-12-06 13:07:04.386771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.064 [2024-12-06 13:07:04.566263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.323 [2024-12-06 13:07:04.783495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.323 [2024-12-06 13:07:04.783566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.889 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.889 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:58.889 13:07:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:12:58.889 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.889 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:59.456 malloc0 00:12:59.456 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.456 13:07:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:59.456 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.456 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.456 [2024-12-06 13:07:05.821682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:59.456 [2024-12-06 13:07:05.821756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.456 [2024-12-06 13:07:05.821803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.456 [2024-12-06 13:07:05.821845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.456 [2024-12-06 13:07:05.824838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.456 [2024-12-06 13:07:05.824883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:59.456 pt0 00:12:59.456 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.456 13:07:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:12:59.456 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.456 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.715 c9e29f4d-10ad-48da-bb06-fb721429ff8c 00:12:59.715 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.715 13:07:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:12:59.715 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.715 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.715 4bfe4c47-4f98-4595-a03e-c1067cc46525 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.715 4a47ed9f-3b52-4654-b5a8-b56242091e0a 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.715 [2024-12-06 13:07:06.015941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4bfe4c47-4f98-4595-a03e-c1067cc46525 is claimed 00:12:59.715 [2024-12-06 13:07:06.016047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a47ed9f-3b52-4654-b5a8-b56242091e0a is claimed 00:12:59.715 [2024-12-06 13:07:06.016213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.715 [2024-12-06 13:07:06.016236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:12:59.715 [2024-12-06 13:07:06.016591] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:59.715 [2024-12-06 13:07:06.016841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.715 [2024-12-06 13:07:06.016856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:59.715 [2024-12-06 13:07:06.017036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:12:59.715 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:12:59.716 13:07:06 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.716 [2024-12-06 13:07:06.120283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.716 [2024-12-06 13:07:06.168311] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:59.716 [2024-12-06 13:07:06.168343] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4bfe4c47-4f98-4595-a03e-c1067cc46525' was resized: old size 131072, new size 204800 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.716 [2024-12-06 13:07:06.176083] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:59.716 [2024-12-06 13:07:06.176108] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4a47ed9f-3b52-4654-b5a8-b56242091e0a' was resized: old size 131072, new size 204800 00:12:59.716 [2024-12-06 13:07:06.176142] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:59.716 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.716 13:07:06 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.975 [2024-12-06 13:07:06.292418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.975 [2024-12-06 13:07:06.344174] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:12:59.975 [2024-12-06 13:07:06.344276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:12:59.975 [2024-12-06 13:07:06.344300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.975 [2024-12-06 13:07:06.344317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:59.975 [2024-12-06 13:07:06.344514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.975 [2024-12-06 13:07:06.344573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.975 [2024-12-06 13:07:06.344594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.975 [2024-12-06 13:07:06.352040] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:59.975 [2024-12-06 13:07:06.352254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.975 [2024-12-06 13:07:06.352390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:59.975 [2024-12-06 13:07:06.352661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.975 [2024-12-06 13:07:06.356113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.975 [2024-12-06 13:07:06.356160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:12:59.975 pt0 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.975 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.975 [2024-12-06 13:07:06.358756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4bfe4c47-4f98-4595-a03e-c1067cc46525 00:12:59.976 [2024-12-06 13:07:06.359027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4bfe4c47-4f98-4595-a03e-c1067cc46525 is claimed 00:12:59.976 [2024-12-06 13:07:06.359182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4a47ed9f-3b52-4654-b5a8-b56242091e0a 00:12:59.976 [2024-12-06 13:07:06.359217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a47ed9f-3b52-4654-b5a8-b56242091e0a is claimed 00:12:59.976 [2024-12-06 13:07:06.359410] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4a47ed9f-3b52-4654-b5a8-b56242091e0a (2) smaller than existing raid bdev Raid (3) 00:12:59.976 [2024-12-06 13:07:06.359446] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4bfe4c47-4f98-4595-a03e-c1067cc46525: File exists 00:12:59.976 [2024-12-06 13:07:06.359536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:59.976 [2024-12-06 13:07:06.359558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:12:59.976 [2024-12-06 13:07:06.359900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:59.976 [2024-12-06 13:07:06.360126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:59.976 [2024-12-06 
13:07:06.360141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:12:59.976 [2024-12-06 13:07:06.360555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.976 [2024-12-06 13:07:06.372715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60256 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60256 ']' 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60256 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60256 00:12:59.976 killing process with pid 60256 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60256' 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60256 00:12:59.976 [2024-12-06 13:07:06.449018] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.976 [2024-12-06 13:07:06.449100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.976 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60256 00:12:59.976 [2024-12-06 13:07:06.449162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.976 [2024-12-06 13:07:06.449177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:13:01.350 [2024-12-06 13:07:07.751893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.725 13:07:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:13:02.725 00:13:02.725 real 0m4.845s 00:13:02.725 user 0m5.097s 00:13:02.725 sys 0m0.798s 00:13:02.725 13:07:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.725 ************************************ 00:13:02.725 END TEST raid0_resize_superblock_test 00:13:02.725 
************************************ 00:13:02.725 13:07:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.725 13:07:08 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:13:02.725 13:07:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.725 13:07:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.725 13:07:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.725 ************************************ 00:13:02.725 START TEST raid1_resize_superblock_test 00:13:02.725 ************************************ 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:13:02.725 Process raid pid: 60355 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60355 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60355' 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60355 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60355 ']' 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:02.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.725 13:07:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.725 [2024-12-06 13:07:09.050930] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:02.725 [2024-12-06 13:07:09.051152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.725 [2024-12-06 13:07:09.243580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.983 [2024-12-06 13:07:09.398894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.241 [2024-12-06 13:07:09.631773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.242 [2024-12-06 13:07:09.631839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.585 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.585 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:03.585 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:13:03.585 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.585 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.162 malloc0 00:13:04.162 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.162 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # 
rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:13:04.162 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.162 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.162 [2024-12-06 13:07:10.599768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:04.162 [2024-12-06 13:07:10.600658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.162 [2024-12-06 13:07:10.600803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:04.162 [2024-12-06 13:07:10.600851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.162 [2024-12-06 13:07:10.604214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.162 [2024-12-06 13:07:10.604501] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:04.162 pt0 00:13:04.162 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.162 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:13:04.162 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.162 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.420 d5f65ed8-f2de-483c-9b6b-d8ad8ea62f15 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.420 
2fad25d6-54a1-4994-a11e-b656e76f5cdb 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.420 10fa8d51-d12a-40a0-a940-71b3e48f44c2 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.420 [2024-12-06 13:07:10.795837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2fad25d6-54a1-4994-a11e-b656e76f5cdb is claimed 00:13:04.420 [2024-12-06 13:07:10.795953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 10fa8d51-d12a-40a0-a940-71b3e48f44c2 is claimed 00:13:04.420 [2024-12-06 13:07:10.796135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:04.420 [2024-12-06 13:07:10.796159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:13:04.420 [2024-12-06 13:07:10.796542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:04.420 [2024-12-06 13:07:10.796827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:04.420 [2024-12-06 
13:07:10.796858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:04.420 [2024-12-06 13:07:10.797105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.420 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.421 [2024-12-06 13:07:10.924265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.421 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.679 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:04.679 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:13:04.679 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:13:04.679 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:13:04.679 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.679 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.679 [2024-12-06 13:07:10.984302] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:04.680 [2024-12-06 13:07:10.984341] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2fad25d6-54a1-4994-a11e-b656e76f5cdb' was resized: old size 131072, new size 204800 00:13:04.680 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.680 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:13:04.680 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.680 13:07:10 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 [2024-12-06 13:07:10.992142] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:04.680 [2024-12-06 13:07:10.992172] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '10fa8d51-d12a-40a0-a940-71b3e48f44c2' was resized: old size 131072, new size 204800 00:13:04.680 [2024-12-06 13:07:10.992534] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:13:04.680 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.680 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:13:04.680 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.680 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 [2024-12-06 13:07:11.120338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 [2024-12-06 13:07:11.164089] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:04.680 [2024-12-06 13:07:11.164199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:04.680 [2024-12-06 13:07:11.164239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:04.680 
[2024-12-06 13:07:11.164469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.680 [2024-12-06 13:07:11.164811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.680 [2024-12-06 13:07:11.164911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.680 [2024-12-06 13:07:11.164935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 [2024-12-06 13:07:11.171960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:04.680 [2024-12-06 13:07:11.172041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.680 [2024-12-06 13:07:11.172075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:13:04.680 [2024-12-06 13:07:11.172097] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.680 [2024-12-06 13:07:11.175327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.680 [2024-12-06 13:07:11.175525] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:04.680 pt0 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:04.680 
13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 [2024-12-06 13:07:11.177968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2fad25d6-54a1-4994-a11e-b656e76f5cdb 00:13:04.680 [2024-12-06 13:07:11.178057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2fad25d6-54a1-4994-a11e-b656e76f5cdb is claimed 00:13:04.680 [2024-12-06 13:07:11.178214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 10fa8d51-d12a-40a0-a940-71b3e48f44c2 00:13:04.680 [2024-12-06 13:07:11.178249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 10fa8d51-d12a-40a0-a940-71b3e48f44c2 is claimed 00:13:04.680 [2024-12-06 13:07:11.178423] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 10fa8d51-d12a-40a0-a940-71b3e48f44c2 (2) smaller than existing raid bdev Raid (3) 00:13:04.680 [2024-12-06 13:07:11.178479] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 2fad25d6-54a1-4994-a11e-b656e76f5cdb: File exists 00:13:04.680 [2024-12-06 13:07:11.178538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:04.680 [2024-12-06 13:07:11.178559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:04.680 [2024-12-06 13:07:11.178885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:04.680 [2024-12-06 13:07:11.179158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:04.680 [2024-12-06 13:07:11.179179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:13:04.680 [2024-12-06 13:07:11.179385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.680 13:07:11 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:13:04.680 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.680 [2024-12-06 13:07:11.192361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60355 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60355 ']' 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60355 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 60355 00:13:04.939 killing process with pid 60355 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60355' 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60355 00:13:04.939 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60355 00:13:04.939 [2024-12-06 13:07:11.273151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:04.939 [2024-12-06 13:07:11.273279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.939 [2024-12-06 13:07:11.273397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.939 [2024-12-06 13:07:11.273419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:13:06.315 [2024-12-06 13:07:12.661854] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.688 ************************************ 00:13:07.688 END TEST raid1_resize_superblock_test 00:13:07.688 ************************************ 00:13:07.688 13:07:13 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:13:07.688 00:13:07.688 real 0m4.877s 00:13:07.688 user 0m5.054s 00:13:07.688 sys 0m0.777s 00:13:07.688 13:07:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.688 13:07:13 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.688 13:07:13 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:13:07.688 13:07:13 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' 
Linux = Linux ']' 00:13:07.688 13:07:13 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:13:07.688 13:07:13 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:13:07.688 13:07:13 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:13:07.688 13:07:13 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:07.688 13:07:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.688 13:07:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.688 13:07:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.688 ************************************ 00:13:07.688 START TEST raid_function_test_raid0 00:13:07.688 ************************************ 00:13:07.688 13:07:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:13:07.688 13:07:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:13:07.688 13:07:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:13:07.688 13:07:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:13:07.688 Process raid pid: 60463 00:13:07.688 13:07:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60463 00:13:07.688 13:07:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:07.688 13:07:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60463' 00:13:07.689 13:07:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60463 00:13:07.689 13:07:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60463 ']' 00:13:07.689 13:07:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.689 13:07:13 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.689 13:07:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.689 13:07:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.689 13:07:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:07.689 [2024-12-06 13:07:13.990507] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:07.689 [2024-12-06 13:07:13.990978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.689 [2024-12-06 13:07:14.166997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.946 [2024-12-06 13:07:14.320701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.203 [2024-12-06 13:07:14.555242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.203 [2024-12-06 13:07:14.555308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.768 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.768 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:13:08.768 13:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:13:08.768 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.768 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set 
+x 00:13:08.768 Base_1 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:08.768 Base_2 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:08.768 [2024-12-06 13:07:15.092524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:08.768 [2024-12-06 13:07:15.095372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:08.768 [2024-12-06 13:07:15.095692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:08.768 [2024-12-06 13:07:15.095723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:08.768 [2024-12-06 13:07:15.096087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:08.768 [2024-12-06 13:07:15.096313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:08.768 [2024-12-06 13:07:15.096328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:13:08.768 [2024-12-06 13:07:15.096590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.768 
13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:08.768 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:08.769 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:08.769 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
raid /dev/nbd0 00:13:09.026 [2024-12-06 13:07:15.456921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:09.026 /dev/nbd0 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.026 1+0 records in 00:13:09.026 1+0 records out 00:13:09.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247563 s, 16.5 MB/s 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.026 
13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.026 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:09.591 { 00:13:09.591 "nbd_device": "/dev/nbd0", 00:13:09.591 "bdev_name": "raid" 00:13:09.591 } 00:13:09.591 ]' 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:09.591 { 00:13:09.591 "nbd_device": "/dev/nbd0", 00:13:09.591 "bdev_name": "raid" 00:13:09.591 } 00:13:09.591 ]' 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:13:09.591 13:07:15 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:09.591 4096+0 records in 00:13:09.591 4096+0 
records out 00:13:09.591 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0337676 s, 62.1 MB/s 00:13:09.591 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:09.850 4096+0 records in 00:13:09.850 4096+0 records out 00:13:09.850 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.264319 s, 7.9 MB/s 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:09.850 128+0 records in 00:13:09.850 128+0 records out 00:13:09.850 65536 bytes (66 kB, 64 KiB) copied, 0.00120252 s, 54.5 MB/s 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # 
unmap_off=526336 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:09.850 2035+0 records in 00:13:09.850 2035+0 records out 00:13:09.850 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.012399 s, 84.0 MB/s 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:09.850 456+0 records in 00:13:09.850 456+0 records out 00:13:09.850 233472 bytes (233 kB, 228 KiB) copied, 0.00307349 s, 76.0 MB/s 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.850 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:10.417 [2024-12-06 13:07:16.691438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.417 
13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.417 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60463 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60463 ']' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60463 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60463 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.676 killing process with pid 60463 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60463' 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60463 00:13:10.676 [2024-12-06 13:07:17.114617] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.676 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60463 00:13:10.676 [2024-12-06 13:07:17.114753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.676 [2024-12-06 13:07:17.114841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.676 [2024-12-06 13:07:17.114883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:13:10.934 [2024-12-06 13:07:17.312345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.311 13:07:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:13:12.311 00:13:12.311 real 0m4.535s 00:13:12.311 user 0m5.514s 00:13:12.311 sys 0m1.160s 00:13:12.311 ************************************ 00:13:12.311 END TEST raid_function_test_raid0 00:13:12.311 ************************************ 00:13:12.311 13:07:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.311 13:07:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:12.311 
13:07:18 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:13:12.311 13:07:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.311 13:07:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.311 13:07:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.311 ************************************ 00:13:12.311 START TEST raid_function_test_concat 00:13:12.311 ************************************ 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60592 00:13:12.311 Process raid pid: 60592 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60592' 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60592 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60592 ']' 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.311 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:12.311 [2024-12-06 13:07:18.600761] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:12.311 [2024-12-06 13:07:18.601004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.311 [2024-12-06 13:07:18.794691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.581 [2024-12-06 13:07:18.956957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.840 [2024-12-06 13:07:19.180440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.840 [2024-12-06 13:07:19.180527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.099 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.099 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:13:13.099 13:07:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:13:13.099 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.099 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:13.357 Base_1 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.357 13:07:19 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:13.357 Base_2 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:13.357 [2024-12-06 13:07:19.725376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:13.357 [2024-12-06 13:07:19.728015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:13.357 [2024-12-06 13:07:19.728130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:13.357 [2024-12-06 13:07:19.728151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:13.357 [2024-12-06 13:07:19.728507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:13.357 [2024-12-06 13:07:19.728730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:13.357 [2024-12-06 13:07:19.728747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:13:13.357 [2024-12-06 13:07:19.728932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.357 13:07:19 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:13.357 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:13.358 13:07:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:13:13.616 [2024-12-06 13:07:20.133636] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:13.875 /dev/nbd0 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.875 1+0 records in 00:13:13.875 1+0 records out 00:13:13.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432037 s, 9.5 MB/s 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 
4096 '!=' 0 ']' 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.875 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:14.134 { 00:13:14.134 "nbd_device": "/dev/nbd0", 00:13:14.134 "bdev_name": "raid" 00:13:14.134 } 00:13:14.134 ]' 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:14.134 { 00:13:14.134 "nbd_device": "/dev/nbd0", 00:13:14.134 "bdev_name": "raid" 00:13:14.134 } 00:13:14.134 ]' 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:14.134 4096+0 records in 00:13:14.134 4096+0 records out 00:13:14.134 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.0334594 s, 62.7 MB/s 00:13:14.134 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:14.700 4096+0 records in 00:13:14.700 4096+0 records out 00:13:14.700 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.335917 s, 6.2 MB/s 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:14.700 128+0 records in 00:13:14.700 128+0 records out 00:13:14.700 65536 bytes (66 kB, 64 KiB) copied, 0.000696486 s, 94.1 MB/s 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:13:14.700 13:07:20 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:13:14.700 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:14.700 2035+0 records in 00:13:14.700 2035+0 records out 00:13:14.700 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0087638 s, 119 MB/s 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:14.700 456+0 records in 00:13:14.700 456+0 records out 00:13:14.700 233472 bytes (233 kB, 228 KiB) copied, 0.00279988 s, 83.4 MB/s 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:13:14.700 13:07:21 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.700 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:14.959 [2024-12-06 13:07:21.353664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.959 
13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:13:14.959 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60592 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60592 ']' 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60592 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.218 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60592 00:13:15.477 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.477 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.477 killing process with pid 60592 00:13:15.477 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60592' 00:13:15.477 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60592 00:13:15.477 [2024-12-06 13:07:21.767078] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.477 13:07:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60592 00:13:15.477 [2024-12-06 13:07:21.767218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.477 [2024-12-06 13:07:21.767300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.477 [2024-12-06 13:07:21.767319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:13:15.477 [2024-12-06 13:07:21.967407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.851 13:07:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:13:16.851 00:13:16.851 real 0m4.660s 00:13:16.851 user 0m5.680s 00:13:16.851 sys 0m1.153s 00:13:16.851 13:07:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.851 ************************************ 00:13:16.851 END TEST raid_function_test_concat 00:13:16.851 13:07:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:16.851 ************************************ 
00:13:16.851 13:07:23 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:13:16.851 13:07:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:16.852 13:07:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.852 13:07:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.852 ************************************ 00:13:16.852 START TEST raid0_resize_test 00:13:16.852 ************************************ 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60732 00:13:16.852 Process raid pid: 60732 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60732' 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60732 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 
60732 ']' 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.852 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.852 [2024-12-06 13:07:23.335393] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:16.852 [2024-12-06 13:07:23.335623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.109 [2024-12-06 13:07:23.531207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.368 [2024-12-06 13:07:23.707147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.657 [2024-12-06 13:07:23.936263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.657 [2024-12-06 13:07:23.936334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.917 Base_1 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.917 Base_2 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.917 [2024-12-06 13:07:24.344269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:17.917 [2024-12-06 13:07:24.346929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:17.917 [2024-12-06 13:07:24.347009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:17.917 [2024-12-06 13:07:24.347029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:17.917 [2024-12-06 13:07:24.347353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:17.917 [2024-12-06 13:07:24.347539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:17.917 [2024-12-06 13:07:24.347563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 
00:13:17.917 [2024-12-06 13:07:24.347733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.917 [2024-12-06 13:07:24.352248] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:17.917 [2024-12-06 13:07:24.352287] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:17.917 true 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.917 [2024-12-06 13:07:24.364506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.917 [2024-12-06 13:07:24.408287] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:17.917 [2024-12-06 13:07:24.408326] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:17.917 [2024-12-06 13:07:24.408371] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:13:17.917 true 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.917 [2024-12-06 13:07:24.420544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.917 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60732 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60732 ']' 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60732 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60732 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.176 killing process with pid 60732 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60732' 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60732 00:13:18.176 [2024-12-06 13:07:24.498254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.176 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60732 00:13:18.176 [2024-12-06 13:07:24.498389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.176 [2024-12-06 13:07:24.498484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.176 [2024-12-06 13:07:24.498507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:18.176 [2024-12-06 13:07:24.514751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.558 13:07:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:13:19.558 
************************************ 00:13:19.558 END TEST raid0_resize_test 00:13:19.558 ************************************ 00:13:19.558 00:13:19.558 real 0m2.465s 00:13:19.558 user 0m2.692s 00:13:19.558 sys 0m0.440s 00:13:19.558 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.558 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.558 13:07:25 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:13:19.558 13:07:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:19.558 13:07:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.558 13:07:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.558 ************************************ 00:13:19.558 START TEST raid1_resize_test 00:13:19.558 ************************************ 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60788 00:13:19.558 Process raid pid: 60788 00:13:19.558 13:07:25 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60788' 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60788 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60788 ']' 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.558 13:07:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.558 [2024-12-06 13:07:25.816202] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:13:19.558 [2024-12-06 13:07:25.816407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.558 [2024-12-06 13:07:25.997500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.816 [2024-12-06 13:07:26.149730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.075 [2024-12-06 13:07:26.381456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.075 [2024-12-06 13:07:26.381532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.333 Base_1 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.333 Base_2 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.333 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.593 [2024-12-06 13:07:26.863314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:20.593 [2024-12-06 13:07:26.865991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:20.593 [2024-12-06 13:07:26.866072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:20.593 [2024-12-06 13:07:26.866119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:20.593 [2024-12-06 13:07:26.866469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:20.593 [2024-12-06 13:07:26.866693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:20.593 [2024-12-06 13:07:26.866716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:20.593 [2024-12-06 13:07:26.866906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.593 [2024-12-06 13:07:26.871279] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:20.593 [2024-12-06 13:07:26.871318] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:20.593 true 00:13:20.593 
13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.593 [2024-12-06 13:07:26.883512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.593 [2024-12-06 13:07:26.927302] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:20.593 [2024-12-06 13:07:26.927337] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:20.593 [2024-12-06 13:07:26.927398] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:13:20.593 true 00:13:20.593 13:07:26 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.593 [2024-12-06 13:07:26.939515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60788 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60788 ']' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60788 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.593 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60788 00:13:20.593 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.593 killing process with pid 60788 00:13:20.593 13:07:27 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.593 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60788' 00:13:20.593 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60788 00:13:20.593 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60788 00:13:20.593 [2024-12-06 13:07:27.014877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.593 [2024-12-06 13:07:27.015006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.593 [2024-12-06 13:07:27.015720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.593 [2024-12-06 13:07:27.015753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:20.593 [2024-12-06 13:07:27.031808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.036 13:07:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:13:22.036 00:13:22.036 real 0m2.485s 00:13:22.036 user 0m2.704s 00:13:22.036 sys 0m0.448s 00:13:22.036 13:07:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.036 13:07:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.036 ************************************ 00:13:22.036 END TEST raid1_resize_test 00:13:22.036 ************************************ 00:13:22.036 13:07:28 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:22.036 13:07:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:22.036 13:07:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:22.036 13:07:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:22.036 13:07:28 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.036 13:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.036 ************************************ 00:13:22.036 START TEST raid_state_function_test 00:13:22.036 ************************************ 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60855 00:13:22.036 Process raid pid: 60855 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60855' 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60855 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60855 ']' 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.036 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.036 [2024-12-06 13:07:28.383036] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:22.036 [2024-12-06 13:07:28.383243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.295 [2024-12-06 13:07:28.587411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.295 [2024-12-06 13:07:28.761076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.554 [2024-12-06 13:07:28.997596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.554 [2024-12-06 13:07:28.997689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 [2024-12-06 13:07:29.384422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.122 
[2024-12-06 13:07:29.384569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.122 [2024-12-06 13:07:29.384590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.122 [2024-12-06 13:07:29.384608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 13:07:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.122 "name": "Existed_Raid", 00:13:23.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.122 "strip_size_kb": 64, 00:13:23.122 "state": "configuring", 00:13:23.122 "raid_level": "raid0", 00:13:23.122 "superblock": false, 00:13:23.123 "num_base_bdevs": 2, 00:13:23.123 "num_base_bdevs_discovered": 0, 00:13:23.123 "num_base_bdevs_operational": 2, 00:13:23.123 "base_bdevs_list": [ 00:13:23.123 { 00:13:23.123 "name": "BaseBdev1", 00:13:23.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.123 "is_configured": false, 00:13:23.123 "data_offset": 0, 00:13:23.123 "data_size": 0 00:13:23.123 }, 00:13:23.123 { 00:13:23.123 "name": "BaseBdev2", 00:13:23.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.123 "is_configured": false, 00:13:23.123 "data_offset": 0, 00:13:23.123 "data_size": 0 00:13:23.123 } 00:13:23.123 ] 00:13:23.123 }' 00:13:23.123 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.123 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.691 [2024-12-06 13:07:29.936560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.691 [2024-12-06 13:07:29.936621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.691 [2024-12-06 13:07:29.944495] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.691 [2024-12-06 13:07:29.944550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.691 [2024-12-06 13:07:29.944567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.691 [2024-12-06 13:07:29.944588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.691 [2024-12-06 13:07:29.995394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.691 BaseBdev1 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:23.691 13:07:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.691 13:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.691 [ 00:13:23.691 { 00:13:23.691 "name": "BaseBdev1", 00:13:23.691 "aliases": [ 00:13:23.691 "6432ed53-303c-443c-8a1a-b233e7c0ae16" 00:13:23.691 ], 00:13:23.691 "product_name": "Malloc disk", 00:13:23.691 "block_size": 512, 00:13:23.691 "num_blocks": 65536, 00:13:23.691 "uuid": "6432ed53-303c-443c-8a1a-b233e7c0ae16", 00:13:23.691 "assigned_rate_limits": { 00:13:23.691 "rw_ios_per_sec": 0, 00:13:23.691 "rw_mbytes_per_sec": 0, 00:13:23.691 "r_mbytes_per_sec": 0, 00:13:23.691 "w_mbytes_per_sec": 0 00:13:23.691 }, 00:13:23.691 "claimed": true, 00:13:23.691 "claim_type": "exclusive_write", 00:13:23.691 "zoned": false, 00:13:23.691 "supported_io_types": { 00:13:23.691 "read": true, 00:13:23.691 "write": true, 00:13:23.691 "unmap": true, 00:13:23.691 "flush": true, 
00:13:23.691 "reset": true, 00:13:23.691 "nvme_admin": false, 00:13:23.691 "nvme_io": false, 00:13:23.691 "nvme_io_md": false, 00:13:23.691 "write_zeroes": true, 00:13:23.691 "zcopy": true, 00:13:23.691 "get_zone_info": false, 00:13:23.691 "zone_management": false, 00:13:23.691 "zone_append": false, 00:13:23.691 "compare": false, 00:13:23.691 "compare_and_write": false, 00:13:23.691 "abort": true, 00:13:23.691 "seek_hole": false, 00:13:23.691 "seek_data": false, 00:13:23.691 "copy": true, 00:13:23.691 "nvme_iov_md": false 00:13:23.691 }, 00:13:23.691 "memory_domains": [ 00:13:23.691 { 00:13:23.691 "dma_device_id": "system", 00:13:23.691 "dma_device_type": 1 00:13:23.691 }, 00:13:23.691 { 00:13:23.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.691 "dma_device_type": 2 00:13:23.691 } 00:13:23.691 ], 00:13:23.691 "driver_specific": {} 00:13:23.691 } 00:13:23.691 ] 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.691 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.692 "name": "Existed_Raid", 00:13:23.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.692 "strip_size_kb": 64, 00:13:23.692 "state": "configuring", 00:13:23.692 "raid_level": "raid0", 00:13:23.692 "superblock": false, 00:13:23.692 "num_base_bdevs": 2, 00:13:23.692 "num_base_bdevs_discovered": 1, 00:13:23.692 "num_base_bdevs_operational": 2, 00:13:23.692 "base_bdevs_list": [ 00:13:23.692 { 00:13:23.692 "name": "BaseBdev1", 00:13:23.692 "uuid": "6432ed53-303c-443c-8a1a-b233e7c0ae16", 00:13:23.692 "is_configured": true, 00:13:23.692 "data_offset": 0, 00:13:23.692 "data_size": 65536 00:13:23.692 }, 00:13:23.692 { 00:13:23.692 "name": "BaseBdev2", 00:13:23.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.692 "is_configured": false, 00:13:23.692 "data_offset": 0, 00:13:23.692 "data_size": 0 00:13:23.692 } 00:13:23.692 ] 00:13:23.692 }' 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.692 13:07:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.259 [2024-12-06 13:07:30.563654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:24.259 [2024-12-06 13:07:30.563734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.259 [2024-12-06 13:07:30.571655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.259 [2024-12-06 13:07:30.574506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:24.259 [2024-12-06 13:07:30.574566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.259 "name": "Existed_Raid", 00:13:24.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.259 "strip_size_kb": 64, 00:13:24.259 "state": "configuring", 00:13:24.259 "raid_level": "raid0", 00:13:24.259 "superblock": false, 00:13:24.259 "num_base_bdevs": 2, 00:13:24.259 
"num_base_bdevs_discovered": 1, 00:13:24.259 "num_base_bdevs_operational": 2, 00:13:24.259 "base_bdevs_list": [ 00:13:24.259 { 00:13:24.259 "name": "BaseBdev1", 00:13:24.259 "uuid": "6432ed53-303c-443c-8a1a-b233e7c0ae16", 00:13:24.259 "is_configured": true, 00:13:24.259 "data_offset": 0, 00:13:24.259 "data_size": 65536 00:13:24.259 }, 00:13:24.259 { 00:13:24.259 "name": "BaseBdev2", 00:13:24.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.259 "is_configured": false, 00:13:24.259 "data_offset": 0, 00:13:24.259 "data_size": 0 00:13:24.259 } 00:13:24.259 ] 00:13:24.259 }' 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.259 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.826 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.827 [2024-12-06 13:07:31.175981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.827 [2024-12-06 13:07:31.176076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:24.827 [2024-12-06 13:07:31.176093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:24.827 [2024-12-06 13:07:31.176492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:24.827 [2024-12-06 13:07:31.176752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:24.827 [2024-12-06 13:07:31.176786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:24.827 [2024-12-06 13:07:31.177124] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.827 BaseBdev2 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.827 [ 00:13:24.827 { 00:13:24.827 "name": "BaseBdev2", 00:13:24.827 "aliases": [ 00:13:24.827 "d93c29b9-e65d-4269-b0a1-2affd2cf0a5c" 00:13:24.827 ], 00:13:24.827 "product_name": "Malloc disk", 00:13:24.827 "block_size": 512, 00:13:24.827 "num_blocks": 65536, 00:13:24.827 "uuid": "d93c29b9-e65d-4269-b0a1-2affd2cf0a5c", 00:13:24.827 
"assigned_rate_limits": { 00:13:24.827 "rw_ios_per_sec": 0, 00:13:24.827 "rw_mbytes_per_sec": 0, 00:13:24.827 "r_mbytes_per_sec": 0, 00:13:24.827 "w_mbytes_per_sec": 0 00:13:24.827 }, 00:13:24.827 "claimed": true, 00:13:24.827 "claim_type": "exclusive_write", 00:13:24.827 "zoned": false, 00:13:24.827 "supported_io_types": { 00:13:24.827 "read": true, 00:13:24.827 "write": true, 00:13:24.827 "unmap": true, 00:13:24.827 "flush": true, 00:13:24.827 "reset": true, 00:13:24.827 "nvme_admin": false, 00:13:24.827 "nvme_io": false, 00:13:24.827 "nvme_io_md": false, 00:13:24.827 "write_zeroes": true, 00:13:24.827 "zcopy": true, 00:13:24.827 "get_zone_info": false, 00:13:24.827 "zone_management": false, 00:13:24.827 "zone_append": false, 00:13:24.827 "compare": false, 00:13:24.827 "compare_and_write": false, 00:13:24.827 "abort": true, 00:13:24.827 "seek_hole": false, 00:13:24.827 "seek_data": false, 00:13:24.827 "copy": true, 00:13:24.827 "nvme_iov_md": false 00:13:24.827 }, 00:13:24.827 "memory_domains": [ 00:13:24.827 { 00:13:24.827 "dma_device_id": "system", 00:13:24.827 "dma_device_type": 1 00:13:24.827 }, 00:13:24.827 { 00:13:24.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.827 "dma_device_type": 2 00:13:24.827 } 00:13:24.827 ], 00:13:24.827 "driver_specific": {} 00:13:24.827 } 00:13:24.827 ] 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.827 "name": "Existed_Raid", 00:13:24.827 "uuid": "0c736d25-90ec-4d54-acea-6e1b33e4f4a5", 00:13:24.827 "strip_size_kb": 64, 00:13:24.827 "state": "online", 00:13:24.827 "raid_level": "raid0", 00:13:24.827 "superblock": false, 00:13:24.827 "num_base_bdevs": 2, 00:13:24.827 "num_base_bdevs_discovered": 2, 00:13:24.827 "num_base_bdevs_operational": 2, 00:13:24.827 "base_bdevs_list": [ 00:13:24.827 { 
00:13:24.827 "name": "BaseBdev1", 00:13:24.827 "uuid": "6432ed53-303c-443c-8a1a-b233e7c0ae16", 00:13:24.827 "is_configured": true, 00:13:24.827 "data_offset": 0, 00:13:24.827 "data_size": 65536 00:13:24.827 }, 00:13:24.827 { 00:13:24.827 "name": "BaseBdev2", 00:13:24.827 "uuid": "d93c29b9-e65d-4269-b0a1-2affd2cf0a5c", 00:13:24.827 "is_configured": true, 00:13:24.827 "data_offset": 0, 00:13:24.827 "data_size": 65536 00:13:24.827 } 00:13:24.827 ] 00:13:24.827 }' 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.827 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.426 [2024-12-06 13:07:31.732588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:25.426 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.427 "name": "Existed_Raid", 00:13:25.427 "aliases": [ 00:13:25.427 "0c736d25-90ec-4d54-acea-6e1b33e4f4a5" 00:13:25.427 ], 00:13:25.427 "product_name": "Raid Volume", 00:13:25.427 "block_size": 512, 00:13:25.427 "num_blocks": 131072, 00:13:25.427 "uuid": "0c736d25-90ec-4d54-acea-6e1b33e4f4a5", 00:13:25.427 "assigned_rate_limits": { 00:13:25.427 "rw_ios_per_sec": 0, 00:13:25.427 "rw_mbytes_per_sec": 0, 00:13:25.427 "r_mbytes_per_sec": 0, 00:13:25.427 "w_mbytes_per_sec": 0 00:13:25.427 }, 00:13:25.427 "claimed": false, 00:13:25.427 "zoned": false, 00:13:25.427 "supported_io_types": { 00:13:25.427 "read": true, 00:13:25.427 "write": true, 00:13:25.427 "unmap": true, 00:13:25.427 "flush": true, 00:13:25.427 "reset": true, 00:13:25.427 "nvme_admin": false, 00:13:25.427 "nvme_io": false, 00:13:25.427 "nvme_io_md": false, 00:13:25.427 "write_zeroes": true, 00:13:25.427 "zcopy": false, 00:13:25.427 "get_zone_info": false, 00:13:25.427 "zone_management": false, 00:13:25.427 "zone_append": false, 00:13:25.427 "compare": false, 00:13:25.427 "compare_and_write": false, 00:13:25.427 "abort": false, 00:13:25.427 "seek_hole": false, 00:13:25.427 "seek_data": false, 00:13:25.427 "copy": false, 00:13:25.427 "nvme_iov_md": false 00:13:25.427 }, 00:13:25.427 "memory_domains": [ 00:13:25.427 { 00:13:25.427 "dma_device_id": "system", 00:13:25.427 "dma_device_type": 1 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.427 "dma_device_type": 2 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "dma_device_id": "system", 00:13:25.427 "dma_device_type": 1 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.427 "dma_device_type": 2 00:13:25.427 } 00:13:25.427 ], 00:13:25.427 "driver_specific": { 00:13:25.427 "raid": { 00:13:25.427 "uuid": "0c736d25-90ec-4d54-acea-6e1b33e4f4a5", 
00:13:25.427 "strip_size_kb": 64, 00:13:25.427 "state": "online", 00:13:25.427 "raid_level": "raid0", 00:13:25.427 "superblock": false, 00:13:25.427 "num_base_bdevs": 2, 00:13:25.427 "num_base_bdevs_discovered": 2, 00:13:25.427 "num_base_bdevs_operational": 2, 00:13:25.427 "base_bdevs_list": [ 00:13:25.427 { 00:13:25.427 "name": "BaseBdev1", 00:13:25.427 "uuid": "6432ed53-303c-443c-8a1a-b233e7c0ae16", 00:13:25.427 "is_configured": true, 00:13:25.427 "data_offset": 0, 00:13:25.427 "data_size": 65536 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "name": "BaseBdev2", 00:13:25.427 "uuid": "d93c29b9-e65d-4269-b0a1-2affd2cf0a5c", 00:13:25.427 "is_configured": true, 00:13:25.427 "data_offset": 0, 00:13:25.427 "data_size": 65536 00:13:25.427 } 00:13:25.427 ] 00:13:25.427 } 00:13:25.427 } 00:13:25.427 }' 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:25.427 BaseBdev2' 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.427 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.685 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.685 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.685 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.685 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.685 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.685 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.685 [2024-12-06 13:07:31.992287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.685 [2024-12-06 13:07:31.992339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.685 [2024-12-06 13:07:31.992415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.685 13:07:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.685 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.685 "name": "Existed_Raid", 00:13:25.685 "uuid": "0c736d25-90ec-4d54-acea-6e1b33e4f4a5", 00:13:25.685 "strip_size_kb": 64, 00:13:25.685 "state": "offline", 00:13:25.685 "raid_level": "raid0", 00:13:25.685 "superblock": false, 00:13:25.685 "num_base_bdevs": 2, 00:13:25.685 "num_base_bdevs_discovered": 1, 00:13:25.685 "num_base_bdevs_operational": 1, 00:13:25.685 "base_bdevs_list": [ 00:13:25.685 { 00:13:25.685 "name": null, 00:13:25.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.685 "is_configured": false, 00:13:25.685 "data_offset": 0, 00:13:25.685 "data_size": 65536 00:13:25.685 }, 00:13:25.685 { 00:13:25.686 "name": "BaseBdev2", 00:13:25.686 "uuid": "d93c29b9-e65d-4269-b0a1-2affd2cf0a5c", 00:13:25.686 "is_configured": true, 00:13:25.686 "data_offset": 0, 00:13:25.686 "data_size": 65536 00:13:25.686 } 00:13:25.686 ] 00:13:25.686 }' 00:13:25.686 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.686 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.253 13:07:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.253 [2024-12-06 13:07:32.651876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.253 [2024-12-06 13:07:32.651964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.253 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60855 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60855 ']' 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60855 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60855 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.512 killing process with pid 60855 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60855' 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60855 00:13:26.512 [2024-12-06 13:07:32.840475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.512 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60855 00:13:26.512 [2024-12-06 13:07:32.855877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:27.906 00:13:27.906 real 0m5.769s 00:13:27.906 user 0m8.622s 00:13:27.906 sys 
0m0.882s 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.906 ************************************ 00:13:27.906 END TEST raid_state_function_test 00:13:27.906 ************************************ 00:13:27.906 13:07:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:27.906 13:07:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:27.906 13:07:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.906 13:07:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.906 ************************************ 00:13:27.906 START TEST raid_state_function_test_sb 00:13:27.906 ************************************ 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61115 00:13:27.906 Process raid pid: 61115 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61115' 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61115 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61115 ']' 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.906 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.906 [2024-12-06 13:07:34.198886] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:13:27.906 [2024-12-06 13:07:34.199101] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.906 [2024-12-06 13:07:34.388634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.165 [2024-12-06 13:07:34.539173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.423 [2024-12-06 13:07:34.776175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.423 [2024-12-06 13:07:34.776249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.681 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.681 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:28.681 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:28.681 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.681 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.681 [2024-12-06 13:07:35.207437] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.681 [2024-12-06 13:07:35.207532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.681 [2024-12-06 13:07:35.207551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.681 [2024-12-06 13:07:35.207569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.939 
13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.939 "name": "Existed_Raid", 00:13:28.939 "uuid": "282cdfb2-f830-4921-93bf-a09fab5767f4", 00:13:28.939 "strip_size_kb": 
64, 00:13:28.939 "state": "configuring", 00:13:28.939 "raid_level": "raid0", 00:13:28.939 "superblock": true, 00:13:28.939 "num_base_bdevs": 2, 00:13:28.939 "num_base_bdevs_discovered": 0, 00:13:28.939 "num_base_bdevs_operational": 2, 00:13:28.939 "base_bdevs_list": [ 00:13:28.939 { 00:13:28.939 "name": "BaseBdev1", 00:13:28.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.939 "is_configured": false, 00:13:28.939 "data_offset": 0, 00:13:28.939 "data_size": 0 00:13:28.939 }, 00:13:28.939 { 00:13:28.939 "name": "BaseBdev2", 00:13:28.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.939 "is_configured": false, 00:13:28.939 "data_offset": 0, 00:13:28.939 "data_size": 0 00:13:28.939 } 00:13:28.939 ] 00:13:28.939 }' 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.939 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.256 [2024-12-06 13:07:35.695582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.256 [2024-12-06 13:07:35.695648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.256 13:07:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.256 [2024-12-06 13:07:35.703498] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.256 [2024-12-06 13:07:35.703558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.256 [2024-12-06 13:07:35.703577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.256 [2024-12-06 13:07:35.703600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.256 [2024-12-06 13:07:35.759962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.256 BaseBdev1 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:29.256 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.257 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:29.257 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.257 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.257 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.530 [ 00:13:29.530 { 00:13:29.530 "name": "BaseBdev1", 00:13:29.530 "aliases": [ 00:13:29.530 "a95018e5-da3a-4b1e-b9fd-1f96a27ef7c3" 00:13:29.530 ], 00:13:29.530 "product_name": "Malloc disk", 00:13:29.530 "block_size": 512, 00:13:29.530 "num_blocks": 65536, 00:13:29.530 "uuid": "a95018e5-da3a-4b1e-b9fd-1f96a27ef7c3", 00:13:29.530 "assigned_rate_limits": { 00:13:29.530 "rw_ios_per_sec": 0, 00:13:29.530 "rw_mbytes_per_sec": 0, 00:13:29.530 "r_mbytes_per_sec": 0, 00:13:29.530 "w_mbytes_per_sec": 0 00:13:29.530 }, 00:13:29.530 "claimed": true, 00:13:29.530 "claim_type": "exclusive_write", 00:13:29.530 "zoned": false, 00:13:29.530 "supported_io_types": { 00:13:29.530 "read": true, 00:13:29.530 "write": true, 00:13:29.530 "unmap": true, 00:13:29.530 "flush": true, 00:13:29.530 "reset": true, 00:13:29.530 "nvme_admin": false, 00:13:29.530 "nvme_io": false, 00:13:29.530 "nvme_io_md": false, 00:13:29.530 "write_zeroes": true, 00:13:29.530 "zcopy": true, 00:13:29.530 "get_zone_info": false, 00:13:29.530 "zone_management": false, 00:13:29.530 "zone_append": false, 00:13:29.530 "compare": false, 00:13:29.530 "compare_and_write": false, 00:13:29.530 
"abort": true, 00:13:29.530 "seek_hole": false, 00:13:29.530 "seek_data": false, 00:13:29.530 "copy": true, 00:13:29.530 "nvme_iov_md": false 00:13:29.530 }, 00:13:29.530 "memory_domains": [ 00:13:29.530 { 00:13:29.530 "dma_device_id": "system", 00:13:29.530 "dma_device_type": 1 00:13:29.530 }, 00:13:29.530 { 00:13:29.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.530 "dma_device_type": 2 00:13:29.530 } 00:13:29.530 ], 00:13:29.530 "driver_specific": {} 00:13:29.530 } 00:13:29.530 ] 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.530 "name": "Existed_Raid", 00:13:29.530 "uuid": "2aacc48f-6a25-43b3-8e32-4613fc806b9b", 00:13:29.530 "strip_size_kb": 64, 00:13:29.530 "state": "configuring", 00:13:29.530 "raid_level": "raid0", 00:13:29.530 "superblock": true, 00:13:29.530 "num_base_bdevs": 2, 00:13:29.530 "num_base_bdevs_discovered": 1, 00:13:29.530 "num_base_bdevs_operational": 2, 00:13:29.530 "base_bdevs_list": [ 00:13:29.530 { 00:13:29.530 "name": "BaseBdev1", 00:13:29.530 "uuid": "a95018e5-da3a-4b1e-b9fd-1f96a27ef7c3", 00:13:29.530 "is_configured": true, 00:13:29.530 "data_offset": 2048, 00:13:29.530 "data_size": 63488 00:13:29.530 }, 00:13:29.530 { 00:13:29.530 "name": "BaseBdev2", 00:13:29.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.530 "is_configured": false, 00:13:29.530 "data_offset": 0, 00:13:29.530 "data_size": 0 00:13:29.530 } 00:13:29.530 ] 00:13:29.530 }' 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.530 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.789 [2024-12-06 13:07:36.296238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.789 [2024-12-06 13:07:36.296315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.789 [2024-12-06 13:07:36.304257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.789 [2024-12-06 13:07:36.306909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.789 [2024-12-06 13:07:36.306970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.789 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.790 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.790 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.048 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.048 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.048 "name": "Existed_Raid", 00:13:30.048 "uuid": "bfb7d89c-e57a-4173-9e74-551278e62120", 00:13:30.048 "strip_size_kb": 64, 00:13:30.048 "state": "configuring", 00:13:30.048 "raid_level": "raid0", 00:13:30.048 "superblock": true, 00:13:30.048 "num_base_bdevs": 2, 00:13:30.048 "num_base_bdevs_discovered": 1, 00:13:30.048 "num_base_bdevs_operational": 2, 00:13:30.048 "base_bdevs_list": [ 00:13:30.048 { 00:13:30.048 "name": "BaseBdev1", 00:13:30.048 "uuid": "a95018e5-da3a-4b1e-b9fd-1f96a27ef7c3", 00:13:30.048 "is_configured": true, 00:13:30.048 "data_offset": 2048, 
00:13:30.048 "data_size": 63488 00:13:30.048 }, 00:13:30.048 { 00:13:30.048 "name": "BaseBdev2", 00:13:30.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.048 "is_configured": false, 00:13:30.048 "data_offset": 0, 00:13:30.048 "data_size": 0 00:13:30.048 } 00:13:30.048 ] 00:13:30.048 }' 00:13:30.048 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.048 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.306 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:30.306 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.306 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.565 [2024-12-06 13:07:36.874614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.565 [2024-12-06 13:07:36.874962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:30.565 [2024-12-06 13:07:36.874997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:30.565 [2024-12-06 13:07:36.875343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:30.565 [2024-12-06 13:07:36.875584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:30.565 [2024-12-06 13:07:36.875609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:30.565 BaseBdev2 00:13:30.565 [2024-12-06 13:07:36.875788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.565 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.565 [ 00:13:30.565 { 00:13:30.565 "name": "BaseBdev2", 00:13:30.565 "aliases": [ 00:13:30.565 "1cecb045-61e9-4ef6-9d76-8e33f62f2934" 00:13:30.565 ], 00:13:30.565 "product_name": "Malloc disk", 00:13:30.565 "block_size": 512, 00:13:30.565 "num_blocks": 65536, 00:13:30.565 "uuid": "1cecb045-61e9-4ef6-9d76-8e33f62f2934", 00:13:30.565 "assigned_rate_limits": { 00:13:30.565 "rw_ios_per_sec": 0, 00:13:30.565 "rw_mbytes_per_sec": 0, 00:13:30.565 "r_mbytes_per_sec": 0, 00:13:30.565 "w_mbytes_per_sec": 0 00:13:30.565 }, 00:13:30.565 "claimed": true, 00:13:30.565 "claim_type": 
"exclusive_write", 00:13:30.565 "zoned": false, 00:13:30.565 "supported_io_types": { 00:13:30.565 "read": true, 00:13:30.565 "write": true, 00:13:30.565 "unmap": true, 00:13:30.565 "flush": true, 00:13:30.565 "reset": true, 00:13:30.565 "nvme_admin": false, 00:13:30.565 "nvme_io": false, 00:13:30.565 "nvme_io_md": false, 00:13:30.565 "write_zeroes": true, 00:13:30.565 "zcopy": true, 00:13:30.565 "get_zone_info": false, 00:13:30.565 "zone_management": false, 00:13:30.566 "zone_append": false, 00:13:30.566 "compare": false, 00:13:30.566 "compare_and_write": false, 00:13:30.566 "abort": true, 00:13:30.566 "seek_hole": false, 00:13:30.566 "seek_data": false, 00:13:30.566 "copy": true, 00:13:30.566 "nvme_iov_md": false 00:13:30.566 }, 00:13:30.566 "memory_domains": [ 00:13:30.566 { 00:13:30.566 "dma_device_id": "system", 00:13:30.566 "dma_device_type": 1 00:13:30.566 }, 00:13:30.566 { 00:13:30.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.566 "dma_device_type": 2 00:13:30.566 } 00:13:30.566 ], 00:13:30.566 "driver_specific": {} 00:13:30.566 } 00:13:30.566 ] 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.566 "name": "Existed_Raid", 00:13:30.566 "uuid": "bfb7d89c-e57a-4173-9e74-551278e62120", 00:13:30.566 "strip_size_kb": 64, 00:13:30.566 "state": "online", 00:13:30.566 "raid_level": "raid0", 00:13:30.566 "superblock": true, 00:13:30.566 "num_base_bdevs": 2, 00:13:30.566 "num_base_bdevs_discovered": 2, 00:13:30.566 "num_base_bdevs_operational": 2, 00:13:30.566 "base_bdevs_list": [ 00:13:30.566 { 00:13:30.566 "name": "BaseBdev1", 00:13:30.566 "uuid": "a95018e5-da3a-4b1e-b9fd-1f96a27ef7c3", 00:13:30.566 "is_configured": true, 00:13:30.566 "data_offset": 2048, 00:13:30.566 "data_size": 63488 
00:13:30.566 }, 00:13:30.566 { 00:13:30.566 "name": "BaseBdev2", 00:13:30.566 "uuid": "1cecb045-61e9-4ef6-9d76-8e33f62f2934", 00:13:30.566 "is_configured": true, 00:13:30.566 "data_offset": 2048, 00:13:30.566 "data_size": 63488 00:13:30.566 } 00:13:30.566 ] 00:13:30.566 }' 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.566 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.132 [2024-12-06 13:07:37.439306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.132 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.132 "name": 
"Existed_Raid", 00:13:31.132 "aliases": [ 00:13:31.132 "bfb7d89c-e57a-4173-9e74-551278e62120" 00:13:31.132 ], 00:13:31.132 "product_name": "Raid Volume", 00:13:31.132 "block_size": 512, 00:13:31.132 "num_blocks": 126976, 00:13:31.132 "uuid": "bfb7d89c-e57a-4173-9e74-551278e62120", 00:13:31.132 "assigned_rate_limits": { 00:13:31.132 "rw_ios_per_sec": 0, 00:13:31.132 "rw_mbytes_per_sec": 0, 00:13:31.132 "r_mbytes_per_sec": 0, 00:13:31.132 "w_mbytes_per_sec": 0 00:13:31.132 }, 00:13:31.132 "claimed": false, 00:13:31.132 "zoned": false, 00:13:31.132 "supported_io_types": { 00:13:31.132 "read": true, 00:13:31.133 "write": true, 00:13:31.133 "unmap": true, 00:13:31.133 "flush": true, 00:13:31.133 "reset": true, 00:13:31.133 "nvme_admin": false, 00:13:31.133 "nvme_io": false, 00:13:31.133 "nvme_io_md": false, 00:13:31.133 "write_zeroes": true, 00:13:31.133 "zcopy": false, 00:13:31.133 "get_zone_info": false, 00:13:31.133 "zone_management": false, 00:13:31.133 "zone_append": false, 00:13:31.133 "compare": false, 00:13:31.133 "compare_and_write": false, 00:13:31.133 "abort": false, 00:13:31.133 "seek_hole": false, 00:13:31.133 "seek_data": false, 00:13:31.133 "copy": false, 00:13:31.133 "nvme_iov_md": false 00:13:31.133 }, 00:13:31.133 "memory_domains": [ 00:13:31.133 { 00:13:31.133 "dma_device_id": "system", 00:13:31.133 "dma_device_type": 1 00:13:31.133 }, 00:13:31.133 { 00:13:31.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.133 "dma_device_type": 2 00:13:31.133 }, 00:13:31.133 { 00:13:31.133 "dma_device_id": "system", 00:13:31.133 "dma_device_type": 1 00:13:31.133 }, 00:13:31.133 { 00:13:31.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.133 "dma_device_type": 2 00:13:31.133 } 00:13:31.133 ], 00:13:31.133 "driver_specific": { 00:13:31.133 "raid": { 00:13:31.133 "uuid": "bfb7d89c-e57a-4173-9e74-551278e62120", 00:13:31.133 "strip_size_kb": 64, 00:13:31.133 "state": "online", 00:13:31.133 "raid_level": "raid0", 00:13:31.133 "superblock": true, 00:13:31.133 
"num_base_bdevs": 2, 00:13:31.133 "num_base_bdevs_discovered": 2, 00:13:31.133 "num_base_bdevs_operational": 2, 00:13:31.133 "base_bdevs_list": [ 00:13:31.133 { 00:13:31.133 "name": "BaseBdev1", 00:13:31.133 "uuid": "a95018e5-da3a-4b1e-b9fd-1f96a27ef7c3", 00:13:31.133 "is_configured": true, 00:13:31.133 "data_offset": 2048, 00:13:31.133 "data_size": 63488 00:13:31.133 }, 00:13:31.133 { 00:13:31.133 "name": "BaseBdev2", 00:13:31.133 "uuid": "1cecb045-61e9-4ef6-9d76-8e33f62f2934", 00:13:31.133 "is_configured": true, 00:13:31.133 "data_offset": 2048, 00:13:31.133 "data_size": 63488 00:13:31.133 } 00:13:31.133 ] 00:13:31.133 } 00:13:31.133 } 00:13:31.133 }' 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:31.133 BaseBdev2' 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.133 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.391 [2024-12-06 13:07:37.762974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.391 [2024-12-06 13:07:37.763023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.391 [2024-12-06 13:07:37.763100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.391 13:07:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.391 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.391 "name": "Existed_Raid", 00:13:31.391 "uuid": "bfb7d89c-e57a-4173-9e74-551278e62120", 00:13:31.391 "strip_size_kb": 64, 00:13:31.391 "state": "offline", 00:13:31.391 "raid_level": "raid0", 00:13:31.391 "superblock": true, 00:13:31.391 "num_base_bdevs": 2, 00:13:31.391 "num_base_bdevs_discovered": 1, 00:13:31.391 "num_base_bdevs_operational": 1, 00:13:31.391 "base_bdevs_list": [ 00:13:31.391 { 00:13:31.391 "name": null, 00:13:31.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.391 "is_configured": false, 00:13:31.391 "data_offset": 0, 00:13:31.391 "data_size": 63488 00:13:31.391 }, 00:13:31.391 { 00:13:31.391 "name": "BaseBdev2", 00:13:31.391 "uuid": "1cecb045-61e9-4ef6-9d76-8e33f62f2934", 00:13:31.391 "is_configured": true, 00:13:31.391 "data_offset": 2048, 00:13:31.391 "data_size": 63488 00:13:31.391 } 00:13:31.391 ] 00:13:31.391 }' 00:13:31.648 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.648 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.906 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:31.906 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:31.906 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.906 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:31.906 13:07:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.906 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.906 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.165 [2024-12-06 13:07:38.447089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.165 [2024-12-06 13:07:38.447167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:32.165 13:07:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61115 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61115 ']' 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61115 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61115 00:13:32.165 killing process with pid 61115 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61115' 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61115 00:13:32.165 [2024-12-06 13:07:38.627704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.165 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61115 00:13:32.165 [2024-12-06 13:07:38.643701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.540 13:07:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:13:33.540 00:13:33.540 real 0m5.721s 00:13:33.540 user 0m8.499s 00:13:33.540 sys 0m0.902s 00:13:33.540 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.540 ************************************ 00:13:33.540 END TEST raid_state_function_test_sb 00:13:33.540 ************************************ 00:13:33.540 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.540 13:07:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:33.540 13:07:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:33.540 13:07:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.540 13:07:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.540 ************************************ 00:13:33.540 START TEST raid_superblock_test 00:13:33.540 ************************************ 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:33.540 13:07:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61367 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61367 00:13:33.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61367 ']' 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.540 13:07:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.540 [2024-12-06 13:07:39.973643] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:33.540 [2024-12-06 13:07:39.974014] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:13:33.799 [2024-12-06 13:07:40.161475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.057 [2024-12-06 13:07:40.331912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.317 [2024-12-06 13:07:40.590721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.317 [2024-12-06 13:07:40.591075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.576 13:07:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.576 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.834 malloc1 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.834 [2024-12-06 13:07:41.130524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:34.834 [2024-12-06 13:07:41.130828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.834 [2024-12-06 13:07:41.130876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.834 [2024-12-06 13:07:41.130895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.834 [2024-12-06 13:07:41.134014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.834 pt1 00:13:34.834 [2024-12-06 13:07:41.134207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.834 13:07:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.834 malloc2 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.834 [2024-12-06 13:07:41.187371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.834 [2024-12-06 13:07:41.187466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.834 [2024-12-06 13:07:41.187507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.834 
[2024-12-06 13:07:41.187524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.834 [2024-12-06 13:07:41.190998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.834 [2024-12-06 13:07:41.191067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.834 pt2 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.834 [2024-12-06 13:07:41.195479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:34.834 [2024-12-06 13:07:41.198472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:34.834 [2024-12-06 13:07:41.198765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.834 [2024-12-06 13:07:41.198794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:34.834 [2024-12-06 13:07:41.199234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:34.834 [2024-12-06 13:07:41.199480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.834 [2024-12-06 13:07:41.199504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.834 [2024-12-06 13:07:41.199759] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:34.834 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.835 "name": "raid_bdev1", 00:13:34.835 "uuid": 
"00af98e9-59f8-40f7-bde3-151293f5438c", 00:13:34.835 "strip_size_kb": 64, 00:13:34.835 "state": "online", 00:13:34.835 "raid_level": "raid0", 00:13:34.835 "superblock": true, 00:13:34.835 "num_base_bdevs": 2, 00:13:34.835 "num_base_bdevs_discovered": 2, 00:13:34.835 "num_base_bdevs_operational": 2, 00:13:34.835 "base_bdevs_list": [ 00:13:34.835 { 00:13:34.835 "name": "pt1", 00:13:34.835 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.835 "is_configured": true, 00:13:34.835 "data_offset": 2048, 00:13:34.835 "data_size": 63488 00:13:34.835 }, 00:13:34.835 { 00:13:34.835 "name": "pt2", 00:13:34.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.835 "is_configured": true, 00:13:34.835 "data_offset": 2048, 00:13:34.835 "data_size": 63488 00:13:34.835 } 00:13:34.835 ] 00:13:34.835 }' 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.835 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.400 
13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.400 [2024-12-06 13:07:41.752267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.400 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.400 "name": "raid_bdev1", 00:13:35.400 "aliases": [ 00:13:35.400 "00af98e9-59f8-40f7-bde3-151293f5438c" 00:13:35.400 ], 00:13:35.400 "product_name": "Raid Volume", 00:13:35.400 "block_size": 512, 00:13:35.400 "num_blocks": 126976, 00:13:35.400 "uuid": "00af98e9-59f8-40f7-bde3-151293f5438c", 00:13:35.400 "assigned_rate_limits": { 00:13:35.400 "rw_ios_per_sec": 0, 00:13:35.400 "rw_mbytes_per_sec": 0, 00:13:35.400 "r_mbytes_per_sec": 0, 00:13:35.400 "w_mbytes_per_sec": 0 00:13:35.400 }, 00:13:35.400 "claimed": false, 00:13:35.400 "zoned": false, 00:13:35.400 "supported_io_types": { 00:13:35.400 "read": true, 00:13:35.401 "write": true, 00:13:35.401 "unmap": true, 00:13:35.401 "flush": true, 00:13:35.401 "reset": true, 00:13:35.401 "nvme_admin": false, 00:13:35.401 "nvme_io": false, 00:13:35.401 "nvme_io_md": false, 00:13:35.401 "write_zeroes": true, 00:13:35.401 "zcopy": false, 00:13:35.401 "get_zone_info": false, 00:13:35.401 "zone_management": false, 00:13:35.401 "zone_append": false, 00:13:35.401 "compare": false, 00:13:35.401 "compare_and_write": false, 00:13:35.401 "abort": false, 00:13:35.401 "seek_hole": false, 00:13:35.401 "seek_data": false, 00:13:35.401 "copy": false, 00:13:35.401 "nvme_iov_md": false 00:13:35.401 }, 00:13:35.401 "memory_domains": [ 00:13:35.401 { 00:13:35.401 "dma_device_id": "system", 00:13:35.401 "dma_device_type": 1 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.401 "dma_device_type": 2 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "dma_device_id": "system", 00:13:35.401 
"dma_device_type": 1 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.401 "dma_device_type": 2 00:13:35.401 } 00:13:35.401 ], 00:13:35.401 "driver_specific": { 00:13:35.401 "raid": { 00:13:35.401 "uuid": "00af98e9-59f8-40f7-bde3-151293f5438c", 00:13:35.401 "strip_size_kb": 64, 00:13:35.401 "state": "online", 00:13:35.401 "raid_level": "raid0", 00:13:35.401 "superblock": true, 00:13:35.401 "num_base_bdevs": 2, 00:13:35.401 "num_base_bdevs_discovered": 2, 00:13:35.401 "num_base_bdevs_operational": 2, 00:13:35.401 "base_bdevs_list": [ 00:13:35.401 { 00:13:35.401 "name": "pt1", 00:13:35.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.401 "is_configured": true, 00:13:35.401 "data_offset": 2048, 00:13:35.401 "data_size": 63488 00:13:35.401 }, 00:13:35.401 { 00:13:35.401 "name": "pt2", 00:13:35.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.401 "is_configured": true, 00:13:35.401 "data_offset": 2048, 00:13:35.401 "data_size": 63488 00:13:35.401 } 00:13:35.401 ] 00:13:35.401 } 00:13:35.401 } 00:13:35.401 }' 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:35.401 pt2' 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.401 13:07:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.401 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.659 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.659 [2024-12-06 13:07:42.004263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:35.659 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.659 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=00af98e9-59f8-40f7-bde3-151293f5438c 00:13:35.659 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 00af98e9-59f8-40f7-bde3-151293f5438c ']' 00:13:35.659 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:35.659 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.659 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.659 [2024-12-06 13:07:42.051863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.659 [2024-12-06 13:07:42.051896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.659 [2024-12-06 13:07:42.052012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.659 [2024-12-06 13:07:42.052088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.659 [2024-12-06 13:07:42.052109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.660 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.919 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:35.919 13:07:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:35.919 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:35.919 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:35.919 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:35.919 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.919 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.920 [2024-12-06 13:07:42.199976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:35.920 [2024-12-06 13:07:42.202855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:35.920 [2024-12-06 13:07:42.203099] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:35.920 [2024-12-06 13:07:42.203189] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:35.920 [2024-12-06 13:07:42.203218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.920 [2024-12-06 13:07:42.203238] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:35.920 request: 00:13:35.920 { 00:13:35.920 "name": "raid_bdev1", 00:13:35.920 "raid_level": "raid0", 00:13:35.920 "base_bdevs": [ 00:13:35.920 "malloc1", 00:13:35.920 "malloc2" 00:13:35.920 ], 00:13:35.920 "strip_size_kb": 64, 00:13:35.920 "superblock": false, 00:13:35.920 "method": "bdev_raid_create", 00:13:35.920 "req_id": 1 00:13:35.920 } 00:13:35.920 Got JSON-RPC error response 00:13:35.920 response: 00:13:35.920 { 00:13:35.920 "code": -17, 00:13:35.920 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:35.920 } 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.920 [2024-12-06 13:07:42.272016] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:35.920 [2024-12-06 13:07:42.272101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.920 [2024-12-06 13:07:42.272130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:35.920 [2024-12-06 13:07:42.272149] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.920 [2024-12-06 13:07:42.275373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.920 [2024-12-06 13:07:42.275428] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:35.920 [2024-12-06 13:07:42.275591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:35.920 [2024-12-06 13:07:42.275671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:35.920 pt1 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.920 "name": "raid_bdev1", 00:13:35.920 "uuid": "00af98e9-59f8-40f7-bde3-151293f5438c", 00:13:35.920 "strip_size_kb": 64, 00:13:35.920 "state": "configuring", 00:13:35.920 "raid_level": "raid0", 00:13:35.920 "superblock": true, 00:13:35.920 "num_base_bdevs": 2, 00:13:35.920 "num_base_bdevs_discovered": 1, 00:13:35.920 "num_base_bdevs_operational": 2, 00:13:35.920 "base_bdevs_list": [ 00:13:35.920 { 00:13:35.920 "name": "pt1", 00:13:35.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.920 "is_configured": true, 00:13:35.920 "data_offset": 2048, 00:13:35.920 "data_size": 63488 00:13:35.920 }, 00:13:35.920 { 00:13:35.920 "name": null, 00:13:35.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.920 "is_configured": false, 00:13:35.920 "data_offset": 2048, 00:13:35.920 "data_size": 63488 00:13:35.920 } 00:13:35.920 ] 00:13:35.920 }' 00:13:35.920 13:07:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.920 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.489 [2024-12-06 13:07:42.788203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.489 [2024-12-06 13:07:42.788321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.489 [2024-12-06 13:07:42.788361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:36.489 [2024-12-06 13:07:42.788381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.489 [2024-12-06 13:07:42.789088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.489 [2024-12-06 13:07:42.789131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.489 [2024-12-06 13:07:42.789269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.489 [2024-12-06 13:07:42.789315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.489 [2024-12-06 13:07:42.789492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:36.489 [2024-12-06 13:07:42.789516] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:36.489 [2024-12-06 13:07:42.789842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:36.489 [2024-12-06 13:07:42.790043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:36.489 [2024-12-06 13:07:42.790058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:36.489 [2024-12-06 13:07:42.790252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.489 pt2 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.489 "name": "raid_bdev1", 00:13:36.489 "uuid": "00af98e9-59f8-40f7-bde3-151293f5438c", 00:13:36.489 "strip_size_kb": 64, 00:13:36.489 "state": "online", 00:13:36.489 "raid_level": "raid0", 00:13:36.489 "superblock": true, 00:13:36.489 "num_base_bdevs": 2, 00:13:36.489 "num_base_bdevs_discovered": 2, 00:13:36.489 "num_base_bdevs_operational": 2, 00:13:36.489 "base_bdevs_list": [ 00:13:36.489 { 00:13:36.489 "name": "pt1", 00:13:36.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.489 "is_configured": true, 00:13:36.489 "data_offset": 2048, 00:13:36.489 "data_size": 63488 00:13:36.489 }, 00:13:36.489 { 00:13:36.489 "name": "pt2", 00:13:36.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.489 "is_configured": true, 00:13:36.489 "data_offset": 2048, 00:13:36.489 "data_size": 63488 00:13:36.489 } 00:13:36.489 ] 00:13:36.489 }' 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.489 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:37.057 
13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.057 [2024-12-06 13:07:43.308682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.057 "name": "raid_bdev1", 00:13:37.057 "aliases": [ 00:13:37.057 "00af98e9-59f8-40f7-bde3-151293f5438c" 00:13:37.057 ], 00:13:37.057 "product_name": "Raid Volume", 00:13:37.057 "block_size": 512, 00:13:37.057 "num_blocks": 126976, 00:13:37.057 "uuid": "00af98e9-59f8-40f7-bde3-151293f5438c", 00:13:37.057 "assigned_rate_limits": { 00:13:37.057 "rw_ios_per_sec": 0, 00:13:37.057 "rw_mbytes_per_sec": 0, 00:13:37.057 "r_mbytes_per_sec": 0, 00:13:37.057 "w_mbytes_per_sec": 0 00:13:37.057 }, 00:13:37.057 "claimed": false, 00:13:37.057 "zoned": false, 00:13:37.057 "supported_io_types": { 00:13:37.057 "read": true, 00:13:37.057 "write": true, 00:13:37.057 "unmap": true, 00:13:37.057 "flush": true, 00:13:37.057 "reset": true, 00:13:37.057 "nvme_admin": false, 00:13:37.057 "nvme_io": false, 00:13:37.057 "nvme_io_md": false, 00:13:37.057 
"write_zeroes": true, 00:13:37.057 "zcopy": false, 00:13:37.057 "get_zone_info": false, 00:13:37.057 "zone_management": false, 00:13:37.057 "zone_append": false, 00:13:37.057 "compare": false, 00:13:37.057 "compare_and_write": false, 00:13:37.057 "abort": false, 00:13:37.057 "seek_hole": false, 00:13:37.057 "seek_data": false, 00:13:37.057 "copy": false, 00:13:37.057 "nvme_iov_md": false 00:13:37.057 }, 00:13:37.057 "memory_domains": [ 00:13:37.057 { 00:13:37.057 "dma_device_id": "system", 00:13:37.057 "dma_device_type": 1 00:13:37.057 }, 00:13:37.057 { 00:13:37.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.057 "dma_device_type": 2 00:13:37.057 }, 00:13:37.057 { 00:13:37.057 "dma_device_id": "system", 00:13:37.057 "dma_device_type": 1 00:13:37.057 }, 00:13:37.057 { 00:13:37.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.057 "dma_device_type": 2 00:13:37.057 } 00:13:37.057 ], 00:13:37.057 "driver_specific": { 00:13:37.057 "raid": { 00:13:37.057 "uuid": "00af98e9-59f8-40f7-bde3-151293f5438c", 00:13:37.057 "strip_size_kb": 64, 00:13:37.057 "state": "online", 00:13:37.057 "raid_level": "raid0", 00:13:37.057 "superblock": true, 00:13:37.057 "num_base_bdevs": 2, 00:13:37.057 "num_base_bdevs_discovered": 2, 00:13:37.057 "num_base_bdevs_operational": 2, 00:13:37.057 "base_bdevs_list": [ 00:13:37.057 { 00:13:37.057 "name": "pt1", 00:13:37.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.057 "is_configured": true, 00:13:37.057 "data_offset": 2048, 00:13:37.057 "data_size": 63488 00:13:37.057 }, 00:13:37.057 { 00:13:37.057 "name": "pt2", 00:13:37.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.057 "is_configured": true, 00:13:37.057 "data_offset": 2048, 00:13:37.057 "data_size": 63488 00:13:37.057 } 00:13:37.057 ] 00:13:37.057 } 00:13:37.057 } 00:13:37.057 }' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:37.057 pt2' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.057 13:07:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.057 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.057 [2024-12-06 13:07:43.572764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 00af98e9-59f8-40f7-bde3-151293f5438c '!=' 00af98e9-59f8-40f7-bde3-151293f5438c ']' 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61367 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61367 ']' 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61367 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61367 00:13:37.316 killing process with pid 61367 
00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61367' 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61367 00:13:37.316 [2024-12-06 13:07:43.677466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.316 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61367 00:13:37.316 [2024-12-06 13:07:43.677593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.316 [2024-12-06 13:07:43.677670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.316 [2024-12-06 13:07:43.677692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:37.575 [2024-12-06 13:07:43.873849] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.558 13:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:38.558 00:13:38.558 real 0m5.158s 00:13:38.558 user 0m7.575s 00:13:38.558 sys 0m0.781s 00:13:38.558 13:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.558 ************************************ 00:13:38.558 END TEST raid_superblock_test 00:13:38.558 ************************************ 00:13:38.558 13:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.558 13:07:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:13:38.558 13:07:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:38.558 13:07:45 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.558 13:07:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.558 ************************************ 00:13:38.558 START TEST raid_read_error_test 00:13:38.558 ************************************ 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:38.558 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:38.816 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:38.816 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:38.816 13:07:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.htMDTJ7zkA 00:13:38.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61584 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61584 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61584 ']' 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.817 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 [2024-12-06 13:07:45.211859] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:38.817 [2024-12-06 13:07:45.212066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61584 ] 00:13:39.075 [2024-12-06 13:07:45.404382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.075 [2024-12-06 13:07:45.589773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.332 [2024-12-06 13:07:45.825529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.332 [2024-12-06 13:07:45.825635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.898 BaseBdev1_malloc 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.898 true 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.898 [2024-12-06 13:07:46.341334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:39.898 [2024-12-06 13:07:46.341416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.898 [2024-12-06 13:07:46.341461] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:39.898 [2024-12-06 13:07:46.341484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.898 [2024-12-06 13:07:46.344692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.898 [2024-12-06 13:07:46.344758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:39.898 BaseBdev1 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:39.898 BaseBdev2_malloc 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.898 true 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.898 [2024-12-06 13:07:46.412311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:39.898 [2024-12-06 13:07:46.412433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.898 [2024-12-06 13:07:46.412492] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:39.898 [2024-12-06 13:07:46.412519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.898 [2024-12-06 13:07:46.415948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.898 [2024-12-06 13:07:46.416014] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:39.898 BaseBdev2 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:39.898 13:07:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.898 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.898 [2024-12-06 13:07:46.420454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.898 [2024-12-06 13:07:46.423374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.157 [2024-12-06 13:07:46.423937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:40.157 [2024-12-06 13:07:46.423974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:40.157 [2024-12-06 13:07:46.424314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:40.157 [2024-12-06 13:07:46.424598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:40.157 [2024-12-06 13:07:46.424622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:40.157 [2024-12-06 13:07:46.424882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.157 "name": "raid_bdev1", 00:13:40.157 "uuid": "0443a4bd-65db-4aa7-bd48-0f21ecc6143a", 00:13:40.157 "strip_size_kb": 64, 00:13:40.157 "state": "online", 00:13:40.157 "raid_level": "raid0", 00:13:40.157 "superblock": true, 00:13:40.157 "num_base_bdevs": 2, 00:13:40.157 "num_base_bdevs_discovered": 2, 00:13:40.157 "num_base_bdevs_operational": 2, 00:13:40.157 "base_bdevs_list": [ 00:13:40.157 { 00:13:40.157 "name": "BaseBdev1", 00:13:40.157 "uuid": "21eca13f-eadb-576f-918b-59660e6e9a88", 00:13:40.157 "is_configured": true, 00:13:40.157 "data_offset": 2048, 00:13:40.157 "data_size": 63488 00:13:40.157 }, 00:13:40.157 { 00:13:40.157 "name": "BaseBdev2", 00:13:40.157 "uuid": "98806155-e01a-58ad-b382-6f35bb808dac", 00:13:40.157 "is_configured": true, 00:13:40.157 "data_offset": 2048, 00:13:40.157 "data_size": 63488 00:13:40.157 } 00:13:40.157 ] 00:13:40.157 }' 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.157 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.723 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:40.723 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:40.723 [2024-12-06 13:07:47.114629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.655 13:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.655 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.655 "name": "raid_bdev1", 00:13:41.655 "uuid": "0443a4bd-65db-4aa7-bd48-0f21ecc6143a", 00:13:41.655 "strip_size_kb": 64, 00:13:41.655 "state": "online", 00:13:41.655 "raid_level": "raid0", 00:13:41.655 "superblock": true, 00:13:41.655 "num_base_bdevs": 2, 00:13:41.655 "num_base_bdevs_discovered": 2, 00:13:41.655 "num_base_bdevs_operational": 2, 00:13:41.655 "base_bdevs_list": [ 00:13:41.655 { 00:13:41.655 "name": "BaseBdev1", 00:13:41.655 "uuid": "21eca13f-eadb-576f-918b-59660e6e9a88", 00:13:41.655 "is_configured": true, 00:13:41.655 "data_offset": 2048, 00:13:41.655 "data_size": 63488 00:13:41.655 }, 00:13:41.655 { 00:13:41.655 "name": "BaseBdev2", 00:13:41.655 "uuid": "98806155-e01a-58ad-b382-6f35bb808dac", 00:13:41.655 "is_configured": true, 00:13:41.655 "data_offset": 2048, 00:13:41.655 "data_size": 63488 00:13:41.655 } 00:13:41.655 ] 00:13:41.655 }' 00:13:41.655 13:07:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.655 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.220 [2024-12-06 13:07:48.509294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.220 [2024-12-06 13:07:48.509353] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.220 [2024-12-06 13:07:48.513059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.220 [2024-12-06 13:07:48.513120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.220 [2024-12-06 13:07:48.513169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.220 [2024-12-06 13:07:48.513199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:42.220 { 00:13:42.220 "results": [ 00:13:42.220 { 00:13:42.220 "job": "raid_bdev1", 00:13:42.220 "core_mask": "0x1", 00:13:42.220 "workload": "randrw", 00:13:42.220 "percentage": 50, 00:13:42.220 "status": "finished", 00:13:42.220 "queue_depth": 1, 00:13:42.220 "io_size": 131072, 00:13:42.220 "runtime": 1.391663, 00:13:42.220 "iops": 9055.353199732981, 00:13:42.220 "mibps": 1131.9191499666226, 00:13:42.220 "io_failed": 1, 00:13:42.220 "io_timeout": 0, 00:13:42.220 "avg_latency_us": 155.1074090584493, 00:13:42.220 "min_latency_us": 42.589090909090906, 00:13:42.220 "max_latency_us": 1921.3963636363637 00:13:42.220 } 00:13:42.220 ], 00:13:42.220 "core_count": 1 00:13:42.220 } 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61584 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61584 ']' 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61584 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61584 00:13:42.220 killing process with pid 61584 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61584' 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61584 00:13:42.220 [2024-12-06 13:07:48.550177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.220 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61584 00:13:42.220 [2024-12-06 13:07:48.690556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.htMDTJ7zkA 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:43.596 ************************************ 00:13:43.596 END TEST raid_read_error_test 00:13:43.596 ************************************ 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:13:43.596 00:13:43.596 real 0m4.920s 00:13:43.596 user 0m6.137s 00:13:43.596 sys 0m0.637s 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.596 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.596 13:07:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:13:43.596 13:07:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:43.596 13:07:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.596 13:07:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.596 ************************************ 00:13:43.596 START TEST raid_write_error_test 00:13:43.596 ************************************ 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:43.596 13:07:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.U6W07oVEtm 00:13:43.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61735 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61735 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61735 ']' 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.596 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.855 [2024-12-06 13:07:50.170534] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:13:43.855 [2024-12-06 13:07:50.171062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61735 ] 00:13:43.855 [2024-12-06 13:07:50.359419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.113 [2024-12-06 13:07:50.531079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.371 [2024-12-06 13:07:50.798567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.371 [2024-12-06 13:07:50.798938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.936 BaseBdev1_malloc 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.936 true 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.936 [2024-12-06 13:07:51.302522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:44.936 [2024-12-06 13:07:51.302802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.936 [2024-12-06 13:07:51.302842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:44.936 [2024-12-06 13:07:51.302861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.936 [2024-12-06 13:07:51.305786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.936 [2024-12-06 13:07:51.305983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:44.936 BaseBdev1 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.936 BaseBdev2_malloc 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:44.936 13:07:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.936 true 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.936 [2024-12-06 13:07:51.364194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:44.936 [2024-12-06 13:07:51.364291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.936 [2024-12-06 13:07:51.364320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:44.936 [2024-12-06 13:07:51.364338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.936 [2024-12-06 13:07:51.367385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.936 [2024-12-06 13:07:51.367678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:44.936 BaseBdev2 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.936 [2024-12-06 13:07:51.372394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:44.936 [2024-12-06 13:07:51.374969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.936 [2024-12-06 13:07:51.375371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:44.936 [2024-12-06 13:07:51.375404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:44.936 [2024-12-06 13:07:51.375738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:44.936 [2024-12-06 13:07:51.375968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:44.936 [2024-12-06 13:07:51.375990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:44.936 [2024-12-06 13:07:51.376199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.936 "name": "raid_bdev1", 00:13:44.936 "uuid": "491f7447-5926-408b-9325-cdf99ea37b38", 00:13:44.936 "strip_size_kb": 64, 00:13:44.936 "state": "online", 00:13:44.936 "raid_level": "raid0", 00:13:44.936 "superblock": true, 00:13:44.936 "num_base_bdevs": 2, 00:13:44.936 "num_base_bdevs_discovered": 2, 00:13:44.936 "num_base_bdevs_operational": 2, 00:13:44.936 "base_bdevs_list": [ 00:13:44.936 { 00:13:44.936 "name": "BaseBdev1", 00:13:44.936 "uuid": "e0182e43-0dfa-5389-955f-ad517db2ff5a", 00:13:44.936 "is_configured": true, 00:13:44.936 "data_offset": 2048, 00:13:44.936 "data_size": 63488 00:13:44.936 }, 00:13:44.936 { 00:13:44.936 "name": "BaseBdev2", 00:13:44.936 "uuid": "aae2c209-ca6d-5eb5-84b9-3c0971e65342", 00:13:44.936 "is_configured": true, 00:13:44.936 "data_offset": 2048, 00:13:44.936 "data_size": 63488 00:13:44.936 } 00:13:44.936 ] 00:13:44.936 }' 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.936 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.501 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:45.501 13:07:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:45.759 [2024-12-06 13:07:52.030070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.695 13:07:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.695 "name": "raid_bdev1", 00:13:46.695 "uuid": "491f7447-5926-408b-9325-cdf99ea37b38", 00:13:46.695 "strip_size_kb": 64, 00:13:46.695 "state": "online", 00:13:46.695 "raid_level": "raid0", 00:13:46.695 "superblock": true, 00:13:46.695 "num_base_bdevs": 2, 00:13:46.695 "num_base_bdevs_discovered": 2, 00:13:46.695 "num_base_bdevs_operational": 2, 00:13:46.695 "base_bdevs_list": [ 00:13:46.695 { 00:13:46.695 "name": "BaseBdev1", 00:13:46.695 "uuid": "e0182e43-0dfa-5389-955f-ad517db2ff5a", 00:13:46.695 "is_configured": true, 00:13:46.695 "data_offset": 2048, 00:13:46.695 "data_size": 63488 00:13:46.695 }, 00:13:46.695 { 00:13:46.695 "name": "BaseBdev2", 00:13:46.695 "uuid": "aae2c209-ca6d-5eb5-84b9-3c0971e65342", 00:13:46.695 "is_configured": true, 00:13:46.695 "data_offset": 2048, 00:13:46.695 "data_size": 63488 00:13:46.695 } 00:13:46.695 ] 00:13:46.695 }' 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.695 13:07:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.954 [2024-12-06 13:07:53.383676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.954 [2024-12-06 13:07:53.383975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.954 { 00:13:46.954 "results": [ 00:13:46.954 { 00:13:46.954 "job": "raid_bdev1", 00:13:46.954 "core_mask": "0x1", 00:13:46.954 "workload": "randrw", 00:13:46.954 "percentage": 50, 00:13:46.954 "status": "finished", 00:13:46.954 "queue_depth": 1, 00:13:46.954 "io_size": 131072, 00:13:46.954 "runtime": 1.351602, 00:13:46.954 "iops": 9782.465548290103, 00:13:46.954 "mibps": 1222.808193536263, 00:13:46.954 "io_failed": 1, 00:13:46.954 "io_timeout": 0, 00:13:46.954 "avg_latency_us": 142.6700110688676, 00:13:46.954 "min_latency_us": 41.658181818181816, 00:13:46.954 "max_latency_us": 1846.9236363636364 00:13:46.954 } 00:13:46.954 ], 00:13:46.954 "core_count": 1 00:13:46.954 } 00:13:46.954 [2024-12-06 13:07:53.387692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.954 [2024-12-06 13:07:53.387819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.954 [2024-12-06 13:07:53.387873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.954 [2024-12-06 13:07:53.387892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61735 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61735 ']' 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61735 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61735 00:13:46.954 killing process with pid 61735 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61735' 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61735 00:13:46.954 [2024-12-06 13:07:53.428176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:46.954 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61735 00:13:47.213 [2024-12-06 13:07:53.595730] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.U6W07oVEtm 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:48.587 ************************************ 00:13:48.587 END TEST raid_write_error_test 00:13:48.587 ************************************ 00:13:48.587 
13:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:48.587 00:13:48.587 real 0m4.904s 00:13:48.587 user 0m6.065s 00:13:48.587 sys 0m0.612s 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.587 13:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.587 13:07:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:48.587 13:07:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:48.587 13:07:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:48.587 13:07:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.587 13:07:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.587 ************************************ 00:13:48.587 START TEST raid_state_function_test 00:13:48.587 ************************************ 00:13:48.587 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:13:48.587 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:48.587 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.588 Process raid pid: 61884 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61884 
00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61884' 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61884 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61884 ']' 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.588 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.845 [2024-12-06 13:07:55.119279] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:13:48.845 [2024-12-06 13:07:55.120485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.845 [2024-12-06 13:07:55.301933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.103 [2024-12-06 13:07:55.453128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.362 [2024-12-06 13:07:55.684780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.362 [2024-12-06 13:07:55.685211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.621 [2024-12-06 13:07:56.091680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:49.621 [2024-12-06 13:07:56.091767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:49.621 [2024-12-06 13:07:56.091785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.621 [2024-12-06 13:07:56.091803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.621 13:07:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.621 "name": "Existed_Raid", 00:13:49.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.621 "strip_size_kb": 64, 00:13:49.621 "state": "configuring", 00:13:49.621 
"raid_level": "concat", 00:13:49.621 "superblock": false, 00:13:49.621 "num_base_bdevs": 2, 00:13:49.621 "num_base_bdevs_discovered": 0, 00:13:49.621 "num_base_bdevs_operational": 2, 00:13:49.621 "base_bdevs_list": [ 00:13:49.621 { 00:13:49.621 "name": "BaseBdev1", 00:13:49.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.621 "is_configured": false, 00:13:49.621 "data_offset": 0, 00:13:49.621 "data_size": 0 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "name": "BaseBdev2", 00:13:49.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.621 "is_configured": false, 00:13:49.621 "data_offset": 0, 00:13:49.621 "data_size": 0 00:13:49.621 } 00:13:49.621 ] 00:13:49.621 }' 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.621 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.187 [2024-12-06 13:07:56.587800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.187 [2024-12-06 13:07:56.587867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:50.187 [2024-12-06 13:07:56.595761] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.187 [2024-12-06 13:07:56.595834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.187 [2024-12-06 13:07:56.595866] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.187 [2024-12-06 13:07:56.595886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.187 [2024-12-06 13:07:56.645043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.187 BaseBdev1 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.187 [ 00:13:50.187 { 00:13:50.187 "name": "BaseBdev1", 00:13:50.187 "aliases": [ 00:13:50.187 "b7f4e2ab-3258-4477-93c3-741127febe78" 00:13:50.187 ], 00:13:50.187 "product_name": "Malloc disk", 00:13:50.187 "block_size": 512, 00:13:50.187 "num_blocks": 65536, 00:13:50.187 "uuid": "b7f4e2ab-3258-4477-93c3-741127febe78", 00:13:50.187 "assigned_rate_limits": { 00:13:50.187 "rw_ios_per_sec": 0, 00:13:50.187 "rw_mbytes_per_sec": 0, 00:13:50.187 "r_mbytes_per_sec": 0, 00:13:50.187 "w_mbytes_per_sec": 0 00:13:50.187 }, 00:13:50.187 "claimed": true, 00:13:50.187 "claim_type": "exclusive_write", 00:13:50.187 "zoned": false, 00:13:50.187 "supported_io_types": { 00:13:50.187 "read": true, 00:13:50.187 "write": true, 00:13:50.187 "unmap": true, 00:13:50.187 "flush": true, 00:13:50.187 "reset": true, 00:13:50.187 "nvme_admin": false, 00:13:50.187 "nvme_io": false, 00:13:50.187 "nvme_io_md": false, 00:13:50.187 "write_zeroes": true, 00:13:50.187 "zcopy": true, 00:13:50.187 "get_zone_info": false, 00:13:50.187 "zone_management": false, 00:13:50.187 "zone_append": false, 00:13:50.187 "compare": false, 00:13:50.187 "compare_and_write": false, 00:13:50.187 "abort": true, 00:13:50.187 "seek_hole": false, 00:13:50.187 "seek_data": false, 00:13:50.187 "copy": true, 00:13:50.187 "nvme_iov_md": 
false 00:13:50.187 }, 00:13:50.187 "memory_domains": [ 00:13:50.187 { 00:13:50.187 "dma_device_id": "system", 00:13:50.187 "dma_device_type": 1 00:13:50.187 }, 00:13:50.187 { 00:13:50.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.187 "dma_device_type": 2 00:13:50.187 } 00:13:50.187 ], 00:13:50.187 "driver_specific": {} 00:13:50.187 } 00:13:50.187 ] 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.187 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.188 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.188 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.188 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.188 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.188 
13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.188 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.188 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.446 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.446 "name": "Existed_Raid", 00:13:50.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.446 "strip_size_kb": 64, 00:13:50.446 "state": "configuring", 00:13:50.446 "raid_level": "concat", 00:13:50.446 "superblock": false, 00:13:50.446 "num_base_bdevs": 2, 00:13:50.446 "num_base_bdevs_discovered": 1, 00:13:50.446 "num_base_bdevs_operational": 2, 00:13:50.446 "base_bdevs_list": [ 00:13:50.446 { 00:13:50.446 "name": "BaseBdev1", 00:13:50.446 "uuid": "b7f4e2ab-3258-4477-93c3-741127febe78", 00:13:50.446 "is_configured": true, 00:13:50.446 "data_offset": 0, 00:13:50.446 "data_size": 65536 00:13:50.446 }, 00:13:50.446 { 00:13:50.446 "name": "BaseBdev2", 00:13:50.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.446 "is_configured": false, 00:13:50.446 "data_offset": 0, 00:13:50.446 "data_size": 0 00:13:50.446 } 00:13:50.446 ] 00:13:50.446 }' 00:13:50.446 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.446 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.705 [2024-12-06 13:07:57.165279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.705 [2024-12-06 13:07:57.165368] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.705 [2024-12-06 13:07:57.177297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.705 [2024-12-06 13:07:57.180021] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.705 [2024-12-06 13:07:57.180196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.705 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.966 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.966 "name": "Existed_Raid", 00:13:50.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.966 "strip_size_kb": 64, 00:13:50.966 "state": "configuring", 00:13:50.966 "raid_level": "concat", 00:13:50.966 "superblock": false, 00:13:50.966 "num_base_bdevs": 2, 00:13:50.966 "num_base_bdevs_discovered": 1, 00:13:50.966 "num_base_bdevs_operational": 2, 00:13:50.966 "base_bdevs_list": [ 00:13:50.966 { 00:13:50.966 "name": "BaseBdev1", 00:13:50.966 "uuid": "b7f4e2ab-3258-4477-93c3-741127febe78", 00:13:50.966 "is_configured": true, 00:13:50.966 "data_offset": 0, 00:13:50.966 "data_size": 65536 00:13:50.966 }, 00:13:50.966 { 00:13:50.966 "name": "BaseBdev2", 00:13:50.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.966 "is_configured": false, 00:13:50.966 "data_offset": 0, 00:13:50.966 "data_size": 0 00:13:50.966 } 
00:13:50.966 ] 00:13:50.966 }' 00:13:50.966 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.966 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.225 [2024-12-06 13:07:57.727788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.225 [2024-12-06 13:07:57.727856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:51.225 [2024-12-06 13:07:57.727869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:51.225 [2024-12-06 13:07:57.728223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:51.225 [2024-12-06 13:07:57.728445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:51.225 [2024-12-06 13:07:57.728466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:51.225 [2024-12-06 13:07:57.728853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.225 BaseBdev2 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.225 13:07:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.225 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.484 [ 00:13:51.484 { 00:13:51.484 "name": "BaseBdev2", 00:13:51.484 "aliases": [ 00:13:51.484 "0ec9bad2-7725-49a4-81a7-7f873ea8248a" 00:13:51.484 ], 00:13:51.484 "product_name": "Malloc disk", 00:13:51.484 "block_size": 512, 00:13:51.484 "num_blocks": 65536, 00:13:51.484 "uuid": "0ec9bad2-7725-49a4-81a7-7f873ea8248a", 00:13:51.484 "assigned_rate_limits": { 00:13:51.484 "rw_ios_per_sec": 0, 00:13:51.484 "rw_mbytes_per_sec": 0, 00:13:51.484 "r_mbytes_per_sec": 0, 00:13:51.484 "w_mbytes_per_sec": 0 00:13:51.484 }, 00:13:51.484 "claimed": true, 00:13:51.484 "claim_type": "exclusive_write", 00:13:51.484 "zoned": false, 00:13:51.484 "supported_io_types": { 00:13:51.484 "read": true, 00:13:51.484 "write": true, 00:13:51.484 "unmap": true, 00:13:51.484 "flush": true, 00:13:51.484 "reset": true, 00:13:51.484 "nvme_admin": false, 00:13:51.484 "nvme_io": false, 00:13:51.484 "nvme_io_md": 
false, 00:13:51.484 "write_zeroes": true, 00:13:51.484 "zcopy": true, 00:13:51.484 "get_zone_info": false, 00:13:51.484 "zone_management": false, 00:13:51.484 "zone_append": false, 00:13:51.484 "compare": false, 00:13:51.484 "compare_and_write": false, 00:13:51.484 "abort": true, 00:13:51.484 "seek_hole": false, 00:13:51.484 "seek_data": false, 00:13:51.484 "copy": true, 00:13:51.484 "nvme_iov_md": false 00:13:51.484 }, 00:13:51.484 "memory_domains": [ 00:13:51.484 { 00:13:51.484 "dma_device_id": "system", 00:13:51.484 "dma_device_type": 1 00:13:51.484 }, 00:13:51.484 { 00:13:51.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.484 "dma_device_type": 2 00:13:51.484 } 00:13:51.484 ], 00:13:51.484 "driver_specific": {} 00:13:51.484 } 00:13:51.484 ] 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.484 "name": "Existed_Raid", 00:13:51.484 "uuid": "04049a61-09ca-4bd8-addd-6dc5f48f6d63", 00:13:51.484 "strip_size_kb": 64, 00:13:51.484 "state": "online", 00:13:51.484 "raid_level": "concat", 00:13:51.484 "superblock": false, 00:13:51.484 "num_base_bdevs": 2, 00:13:51.484 "num_base_bdevs_discovered": 2, 00:13:51.484 "num_base_bdevs_operational": 2, 00:13:51.484 "base_bdevs_list": [ 00:13:51.484 { 00:13:51.484 "name": "BaseBdev1", 00:13:51.484 "uuid": "b7f4e2ab-3258-4477-93c3-741127febe78", 00:13:51.484 "is_configured": true, 00:13:51.484 "data_offset": 0, 00:13:51.484 "data_size": 65536 00:13:51.484 }, 00:13:51.484 { 00:13:51.484 "name": "BaseBdev2", 00:13:51.484 "uuid": "0ec9bad2-7725-49a4-81a7-7f873ea8248a", 00:13:51.484 "is_configured": true, 00:13:51.484 "data_offset": 0, 00:13:51.484 "data_size": 65536 00:13:51.484 } 00:13:51.484 ] 00:13:51.484 }' 00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:51.484 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.052 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:52.052 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:52.053 [2024-12-06 13:07:58.328380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:52.053 "name": "Existed_Raid", 00:13:52.053 "aliases": [ 00:13:52.053 "04049a61-09ca-4bd8-addd-6dc5f48f6d63" 00:13:52.053 ], 00:13:52.053 "product_name": "Raid Volume", 00:13:52.053 "block_size": 512, 00:13:52.053 "num_blocks": 131072, 00:13:52.053 "uuid": "04049a61-09ca-4bd8-addd-6dc5f48f6d63", 00:13:52.053 "assigned_rate_limits": { 00:13:52.053 "rw_ios_per_sec": 0, 00:13:52.053 "rw_mbytes_per_sec": 0, 00:13:52.053 "r_mbytes_per_sec": 
0, 00:13:52.053 "w_mbytes_per_sec": 0 00:13:52.053 }, 00:13:52.053 "claimed": false, 00:13:52.053 "zoned": false, 00:13:52.053 "supported_io_types": { 00:13:52.053 "read": true, 00:13:52.053 "write": true, 00:13:52.053 "unmap": true, 00:13:52.053 "flush": true, 00:13:52.053 "reset": true, 00:13:52.053 "nvme_admin": false, 00:13:52.053 "nvme_io": false, 00:13:52.053 "nvme_io_md": false, 00:13:52.053 "write_zeroes": true, 00:13:52.053 "zcopy": false, 00:13:52.053 "get_zone_info": false, 00:13:52.053 "zone_management": false, 00:13:52.053 "zone_append": false, 00:13:52.053 "compare": false, 00:13:52.053 "compare_and_write": false, 00:13:52.053 "abort": false, 00:13:52.053 "seek_hole": false, 00:13:52.053 "seek_data": false, 00:13:52.053 "copy": false, 00:13:52.053 "nvme_iov_md": false 00:13:52.053 }, 00:13:52.053 "memory_domains": [ 00:13:52.053 { 00:13:52.053 "dma_device_id": "system", 00:13:52.053 "dma_device_type": 1 00:13:52.053 }, 00:13:52.053 { 00:13:52.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.053 "dma_device_type": 2 00:13:52.053 }, 00:13:52.053 { 00:13:52.053 "dma_device_id": "system", 00:13:52.053 "dma_device_type": 1 00:13:52.053 }, 00:13:52.053 { 00:13:52.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.053 "dma_device_type": 2 00:13:52.053 } 00:13:52.053 ], 00:13:52.053 "driver_specific": { 00:13:52.053 "raid": { 00:13:52.053 "uuid": "04049a61-09ca-4bd8-addd-6dc5f48f6d63", 00:13:52.053 "strip_size_kb": 64, 00:13:52.053 "state": "online", 00:13:52.053 "raid_level": "concat", 00:13:52.053 "superblock": false, 00:13:52.053 "num_base_bdevs": 2, 00:13:52.053 "num_base_bdevs_discovered": 2, 00:13:52.053 "num_base_bdevs_operational": 2, 00:13:52.053 "base_bdevs_list": [ 00:13:52.053 { 00:13:52.053 "name": "BaseBdev1", 00:13:52.053 "uuid": "b7f4e2ab-3258-4477-93c3-741127febe78", 00:13:52.053 "is_configured": true, 00:13:52.053 "data_offset": 0, 00:13:52.053 "data_size": 65536 00:13:52.053 }, 00:13:52.053 { 00:13:52.053 "name": "BaseBdev2", 
00:13:52.053 "uuid": "0ec9bad2-7725-49a4-81a7-7f873ea8248a", 00:13:52.053 "is_configured": true, 00:13:52.053 "data_offset": 0, 00:13:52.053 "data_size": 65536 00:13:52.053 } 00:13:52.053 ] 00:13:52.053 } 00:13:52.053 } 00:13:52.053 }' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:52.053 BaseBdev2' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.053 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.313 [2024-12-06 13:07:58.584119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.313 [2024-12-06 13:07:58.584292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.313 [2024-12-06 13:07:58.584499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.313 "name": "Existed_Raid", 00:13:52.313 "uuid": "04049a61-09ca-4bd8-addd-6dc5f48f6d63", 00:13:52.313 "strip_size_kb": 64, 00:13:52.313 
"state": "offline", 00:13:52.313 "raid_level": "concat", 00:13:52.313 "superblock": false, 00:13:52.313 "num_base_bdevs": 2, 00:13:52.313 "num_base_bdevs_discovered": 1, 00:13:52.313 "num_base_bdevs_operational": 1, 00:13:52.313 "base_bdevs_list": [ 00:13:52.313 { 00:13:52.313 "name": null, 00:13:52.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.313 "is_configured": false, 00:13:52.313 "data_offset": 0, 00:13:52.313 "data_size": 65536 00:13:52.313 }, 00:13:52.313 { 00:13:52.313 "name": "BaseBdev2", 00:13:52.313 "uuid": "0ec9bad2-7725-49a4-81a7-7f873ea8248a", 00:13:52.313 "is_configured": true, 00:13:52.313 "data_offset": 0, 00:13:52.313 "data_size": 65536 00:13:52.313 } 00:13:52.313 ] 00:13:52.313 }' 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.313 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.880 [2024-12-06 13:07:59.249496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.880 [2024-12-06 13:07:59.249577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61884 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61884 ']' 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61884 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.880 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61884 00:13:53.139 killing process with pid 61884 00:13:53.139 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.139 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.139 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61884' 00:13:53.139 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61884 00:13:53.139 [2024-12-06 13:07:59.427550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.139 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61884 00:13:53.139 [2024-12-06 13:07:59.442821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.073 13:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:54.073 00:13:54.073 real 0m5.578s 00:13:54.073 user 0m8.284s 00:13:54.073 sys 0m0.831s 00:13:54.073 ************************************ 00:13:54.073 END TEST raid_state_function_test 00:13:54.073 ************************************ 00:13:54.073 13:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.073 13:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.331 13:08:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:54.331 13:08:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:13:54.331 13:08:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.331 13:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.331 ************************************ 00:13:54.331 START TEST raid_state_function_test_sb 00:13:54.331 ************************************ 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:54.331 Process raid pid: 62143 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62143 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62143' 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62143 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62143 ']' 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.331 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.331 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.331 [2024-12-06 13:08:00.761929] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:13:54.331 [2024-12-06 13:08:00.762168] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.590 [2024-12-06 13:08:00.956825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.848 [2024-12-06 13:08:01.129297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.106 [2024-12-06 13:08:01.378872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.106 [2024-12-06 13:08:01.379151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.671 [2024-12-06 13:08:01.927380] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:13:55.671 [2024-12-06 13:08:01.927653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.671 [2024-12-06 13:08:01.927700] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.671 [2024-12-06 13:08:01.927739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.671 
13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.671 "name": "Existed_Raid", 00:13:55.671 "uuid": "6152d42d-e8ff-4456-bdef-c9ebc8daf730", 00:13:55.671 "strip_size_kb": 64, 00:13:55.671 "state": "configuring", 00:13:55.671 "raid_level": "concat", 00:13:55.671 "superblock": true, 00:13:55.671 "num_base_bdevs": 2, 00:13:55.671 "num_base_bdevs_discovered": 0, 00:13:55.671 "num_base_bdevs_operational": 2, 00:13:55.671 "base_bdevs_list": [ 00:13:55.671 { 00:13:55.671 "name": "BaseBdev1", 00:13:55.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.671 "is_configured": false, 00:13:55.671 "data_offset": 0, 00:13:55.671 "data_size": 0 00:13:55.671 }, 00:13:55.671 { 00:13:55.671 "name": "BaseBdev2", 00:13:55.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.671 "is_configured": false, 00:13:55.671 "data_offset": 0, 00:13:55.671 "data_size": 0 00:13:55.671 } 00:13:55.671 ] 00:13:55.671 }' 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.671 13:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.928 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.928 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.928 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.186 [2024-12-06 13:08:02.455423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:13:56.186 [2024-12-06 13:08:02.455631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.186 [2024-12-06 13:08:02.463418] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.186 [2024-12-06 13:08:02.463618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.186 [2024-12-06 13:08:02.463646] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.186 [2024-12-06 13:08:02.463670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.186 [2024-12-06 13:08:02.514488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.186 BaseBdev1 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.186 [ 00:13:56.186 { 00:13:56.186 "name": "BaseBdev1", 00:13:56.186 "aliases": [ 00:13:56.186 "01a7e988-8d77-4358-b2f5-ad990972b599" 00:13:56.186 ], 00:13:56.186 "product_name": "Malloc disk", 00:13:56.186 "block_size": 512, 00:13:56.186 "num_blocks": 65536, 00:13:56.186 "uuid": "01a7e988-8d77-4358-b2f5-ad990972b599", 00:13:56.186 "assigned_rate_limits": { 00:13:56.186 "rw_ios_per_sec": 0, 00:13:56.186 "rw_mbytes_per_sec": 0, 00:13:56.186 "r_mbytes_per_sec": 0, 00:13:56.186 "w_mbytes_per_sec": 0 00:13:56.186 }, 00:13:56.186 "claimed": true, 
00:13:56.186 "claim_type": "exclusive_write", 00:13:56.186 "zoned": false, 00:13:56.186 "supported_io_types": { 00:13:56.186 "read": true, 00:13:56.186 "write": true, 00:13:56.186 "unmap": true, 00:13:56.186 "flush": true, 00:13:56.186 "reset": true, 00:13:56.186 "nvme_admin": false, 00:13:56.186 "nvme_io": false, 00:13:56.186 "nvme_io_md": false, 00:13:56.186 "write_zeroes": true, 00:13:56.186 "zcopy": true, 00:13:56.186 "get_zone_info": false, 00:13:56.186 "zone_management": false, 00:13:56.186 "zone_append": false, 00:13:56.186 "compare": false, 00:13:56.186 "compare_and_write": false, 00:13:56.186 "abort": true, 00:13:56.186 "seek_hole": false, 00:13:56.186 "seek_data": false, 00:13:56.186 "copy": true, 00:13:56.186 "nvme_iov_md": false 00:13:56.186 }, 00:13:56.186 "memory_domains": [ 00:13:56.186 { 00:13:56.186 "dma_device_id": "system", 00:13:56.186 "dma_device_type": 1 00:13:56.186 }, 00:13:56.186 { 00:13:56.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.186 "dma_device_type": 2 00:13:56.186 } 00:13:56.186 ], 00:13:56.186 "driver_specific": {} 00:13:56.186 } 00:13:56.186 ] 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.186 13:08:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.186 "name": "Existed_Raid", 00:13:56.186 "uuid": "58db7d1e-bb01-4b17-bffc-4d7a9c636e39", 00:13:56.186 "strip_size_kb": 64, 00:13:56.186 "state": "configuring", 00:13:56.186 "raid_level": "concat", 00:13:56.186 "superblock": true, 00:13:56.186 "num_base_bdevs": 2, 00:13:56.186 "num_base_bdevs_discovered": 1, 00:13:56.186 "num_base_bdevs_operational": 2, 00:13:56.186 "base_bdevs_list": [ 00:13:56.186 { 00:13:56.186 "name": "BaseBdev1", 00:13:56.186 "uuid": "01a7e988-8d77-4358-b2f5-ad990972b599", 00:13:56.186 "is_configured": true, 00:13:56.186 "data_offset": 2048, 00:13:56.186 "data_size": 63488 00:13:56.186 }, 00:13:56.186 { 00:13:56.186 "name": "BaseBdev2", 00:13:56.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.186 
"is_configured": false, 00:13:56.186 "data_offset": 0, 00:13:56.186 "data_size": 0 00:13:56.186 } 00:13:56.186 ] 00:13:56.186 }' 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.186 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.752 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:56.752 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.752 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.752 [2024-12-06 13:08:03.082708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.752 [2024-12-06 13:08:03.082951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.753 [2024-12-06 13:08:03.090742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.753 [2024-12-06 13:08:03.093425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.753 [2024-12-06 13:08:03.093621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.753 13:08:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.753 13:08:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.753 "name": "Existed_Raid", 00:13:56.753 "uuid": "a5e0853c-18eb-42f4-b9e1-9a842c4ff3df", 00:13:56.753 "strip_size_kb": 64, 00:13:56.753 "state": "configuring", 00:13:56.753 "raid_level": "concat", 00:13:56.753 "superblock": true, 00:13:56.753 "num_base_bdevs": 2, 00:13:56.753 "num_base_bdevs_discovered": 1, 00:13:56.753 "num_base_bdevs_operational": 2, 00:13:56.753 "base_bdevs_list": [ 00:13:56.753 { 00:13:56.753 "name": "BaseBdev1", 00:13:56.753 "uuid": "01a7e988-8d77-4358-b2f5-ad990972b599", 00:13:56.753 "is_configured": true, 00:13:56.753 "data_offset": 2048, 00:13:56.753 "data_size": 63488 00:13:56.753 }, 00:13:56.753 { 00:13:56.753 "name": "BaseBdev2", 00:13:56.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.753 "is_configured": false, 00:13:56.753 "data_offset": 0, 00:13:56.753 "data_size": 0 00:13:56.753 } 00:13:56.753 ] 00:13:56.753 }' 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.753 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.320 [2024-12-06 13:08:03.665152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.320 [2024-12-06 13:08:03.665713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:57.320 [2024-12-06 13:08:03.665875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.320 BaseBdev2 00:13:57.320 [2024-12-06 13:08:03.666295] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.320 [2024-12-06 13:08:03.666669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:57.320 [2024-12-06 13:08:03.666697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:57.320 [2024-12-06 13:08:03.666887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:57.320 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.320 
13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.320 [ 00:13:57.320 { 00:13:57.320 "name": "BaseBdev2", 00:13:57.320 "aliases": [ 00:13:57.320 "dd642c2c-5455-472d-807d-349478a5f0a8" 00:13:57.320 ], 00:13:57.320 "product_name": "Malloc disk", 00:13:57.320 "block_size": 512, 00:13:57.320 "num_blocks": 65536, 00:13:57.320 "uuid": "dd642c2c-5455-472d-807d-349478a5f0a8", 00:13:57.320 "assigned_rate_limits": { 00:13:57.320 "rw_ios_per_sec": 0, 00:13:57.320 "rw_mbytes_per_sec": 0, 00:13:57.320 "r_mbytes_per_sec": 0, 00:13:57.320 "w_mbytes_per_sec": 0 00:13:57.320 }, 00:13:57.320 "claimed": true, 00:13:57.320 "claim_type": "exclusive_write", 00:13:57.320 "zoned": false, 00:13:57.320 "supported_io_types": { 00:13:57.320 "read": true, 00:13:57.320 "write": true, 00:13:57.320 "unmap": true, 00:13:57.320 "flush": true, 00:13:57.320 "reset": true, 00:13:57.320 "nvme_admin": false, 00:13:57.320 "nvme_io": false, 00:13:57.320 "nvme_io_md": false, 00:13:57.320 "write_zeroes": true, 00:13:57.320 "zcopy": true, 00:13:57.320 "get_zone_info": false, 00:13:57.320 "zone_management": false, 00:13:57.320 "zone_append": false, 00:13:57.320 "compare": false, 00:13:57.320 "compare_and_write": false, 00:13:57.320 "abort": true, 00:13:57.320 "seek_hole": false, 00:13:57.320 "seek_data": false, 00:13:57.320 "copy": true, 00:13:57.320 "nvme_iov_md": false 00:13:57.320 }, 00:13:57.320 "memory_domains": [ 00:13:57.320 { 00:13:57.320 "dma_device_id": "system", 00:13:57.320 "dma_device_type": 1 00:13:57.320 }, 00:13:57.320 { 00:13:57.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.320 "dma_device_type": 2 00:13:57.320 } 00:13:57.320 ], 00:13:57.320 "driver_specific": {} 00:13:57.320 } 00:13:57.320 ] 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:57.321 13:08:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.321 13:08:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.321 "name": "Existed_Raid", 00:13:57.321 "uuid": "a5e0853c-18eb-42f4-b9e1-9a842c4ff3df", 00:13:57.321 "strip_size_kb": 64, 00:13:57.321 "state": "online", 00:13:57.321 "raid_level": "concat", 00:13:57.321 "superblock": true, 00:13:57.321 "num_base_bdevs": 2, 00:13:57.321 "num_base_bdevs_discovered": 2, 00:13:57.321 "num_base_bdevs_operational": 2, 00:13:57.321 "base_bdevs_list": [ 00:13:57.321 { 00:13:57.321 "name": "BaseBdev1", 00:13:57.321 "uuid": "01a7e988-8d77-4358-b2f5-ad990972b599", 00:13:57.321 "is_configured": true, 00:13:57.321 "data_offset": 2048, 00:13:57.321 "data_size": 63488 00:13:57.321 }, 00:13:57.321 { 00:13:57.321 "name": "BaseBdev2", 00:13:57.321 "uuid": "dd642c2c-5455-472d-807d-349478a5f0a8", 00:13:57.321 "is_configured": true, 00:13:57.321 "data_offset": 2048, 00:13:57.321 "data_size": 63488 00:13:57.321 } 00:13:57.321 ] 00:13:57.321 }' 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.321 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:57.888 [2024-12-06 13:08:04.221732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:57.888 "name": "Existed_Raid", 00:13:57.888 "aliases": [ 00:13:57.888 "a5e0853c-18eb-42f4-b9e1-9a842c4ff3df" 00:13:57.888 ], 00:13:57.888 "product_name": "Raid Volume", 00:13:57.888 "block_size": 512, 00:13:57.888 "num_blocks": 126976, 00:13:57.888 "uuid": "a5e0853c-18eb-42f4-b9e1-9a842c4ff3df", 00:13:57.888 "assigned_rate_limits": { 00:13:57.888 "rw_ios_per_sec": 0, 00:13:57.888 "rw_mbytes_per_sec": 0, 00:13:57.888 "r_mbytes_per_sec": 0, 00:13:57.888 "w_mbytes_per_sec": 0 00:13:57.888 }, 00:13:57.888 "claimed": false, 00:13:57.888 "zoned": false, 00:13:57.888 "supported_io_types": { 00:13:57.888 "read": true, 00:13:57.888 "write": true, 00:13:57.888 "unmap": true, 00:13:57.888 "flush": true, 00:13:57.888 "reset": true, 00:13:57.888 "nvme_admin": false, 00:13:57.888 "nvme_io": false, 00:13:57.888 "nvme_io_md": false, 00:13:57.888 "write_zeroes": true, 00:13:57.888 "zcopy": false, 00:13:57.888 "get_zone_info": false, 00:13:57.888 "zone_management": false, 00:13:57.888 "zone_append": false, 00:13:57.888 "compare": false, 00:13:57.888 "compare_and_write": false, 00:13:57.888 "abort": false, 00:13:57.888 "seek_hole": false, 00:13:57.888 "seek_data": false, 00:13:57.888 "copy": false, 00:13:57.888 "nvme_iov_md": false 00:13:57.888 }, 00:13:57.888 "memory_domains": [ 00:13:57.888 { 00:13:57.888 
"dma_device_id": "system", 00:13:57.888 "dma_device_type": 1 00:13:57.888 }, 00:13:57.888 { 00:13:57.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.888 "dma_device_type": 2 00:13:57.888 }, 00:13:57.888 { 00:13:57.888 "dma_device_id": "system", 00:13:57.888 "dma_device_type": 1 00:13:57.888 }, 00:13:57.888 { 00:13:57.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.888 "dma_device_type": 2 00:13:57.888 } 00:13:57.888 ], 00:13:57.888 "driver_specific": { 00:13:57.888 "raid": { 00:13:57.888 "uuid": "a5e0853c-18eb-42f4-b9e1-9a842c4ff3df", 00:13:57.888 "strip_size_kb": 64, 00:13:57.888 "state": "online", 00:13:57.888 "raid_level": "concat", 00:13:57.888 "superblock": true, 00:13:57.888 "num_base_bdevs": 2, 00:13:57.888 "num_base_bdevs_discovered": 2, 00:13:57.888 "num_base_bdevs_operational": 2, 00:13:57.888 "base_bdevs_list": [ 00:13:57.888 { 00:13:57.888 "name": "BaseBdev1", 00:13:57.888 "uuid": "01a7e988-8d77-4358-b2f5-ad990972b599", 00:13:57.888 "is_configured": true, 00:13:57.888 "data_offset": 2048, 00:13:57.888 "data_size": 63488 00:13:57.888 }, 00:13:57.888 { 00:13:57.888 "name": "BaseBdev2", 00:13:57.888 "uuid": "dd642c2c-5455-472d-807d-349478a5f0a8", 00:13:57.888 "is_configured": true, 00:13:57.888 "data_offset": 2048, 00:13:57.888 "data_size": 63488 00:13:57.888 } 00:13:57.888 ] 00:13:57.888 } 00:13:57.888 } 00:13:57.888 }' 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:57.888 BaseBdev2' 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:57.888 13:08:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.888 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.147 [2024-12-06 13:08:04.501512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.147 [2024-12-06 13:08:04.501567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.147 [2024-12-06 13:08:04.501645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:58.147 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.148 "name": "Existed_Raid", 00:13:58.148 "uuid": "a5e0853c-18eb-42f4-b9e1-9a842c4ff3df", 00:13:58.148 "strip_size_kb": 64, 00:13:58.148 "state": "offline", 00:13:58.148 "raid_level": "concat", 00:13:58.148 "superblock": true, 00:13:58.148 "num_base_bdevs": 2, 00:13:58.148 "num_base_bdevs_discovered": 1, 00:13:58.148 "num_base_bdevs_operational": 1, 00:13:58.148 "base_bdevs_list": [ 00:13:58.148 { 00:13:58.148 "name": null, 00:13:58.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.148 "is_configured": false, 00:13:58.148 "data_offset": 0, 00:13:58.148 "data_size": 63488 00:13:58.148 }, 00:13:58.148 { 00:13:58.148 "name": "BaseBdev2", 00:13:58.148 "uuid": "dd642c2c-5455-472d-807d-349478a5f0a8", 00:13:58.148 "is_configured": true, 00:13:58.148 "data_offset": 2048, 00:13:58.148 "data_size": 63488 00:13:58.148 } 00:13:58.148 ] 
00:13:58.148 }' 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.148 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.723 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.723 [2024-12-06 13:08:05.189378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:58.723 [2024-12-06 13:08:05.189458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.981 13:08:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62143 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62143 ']' 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62143 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62143 00:13:58.981 killing process with pid 62143 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62143' 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62143 00:13:58.981 [2024-12-06 13:08:05.363406] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.981 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62143 00:13:58.981 [2024-12-06 13:08:05.379551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.356 13:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:00.356 00:14:00.356 real 0m5.868s 00:14:00.356 user 0m8.859s 00:14:00.356 sys 0m0.843s 00:14:00.356 ************************************ 00:14:00.356 END TEST raid_state_function_test_sb 00:14:00.356 ************************************ 00:14:00.356 13:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.356 13:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.356 13:08:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:00.356 13:08:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:00.356 13:08:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.356 13:08:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.356 ************************************ 00:14:00.356 START TEST raid_superblock_test 00:14:00.356 ************************************ 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:00.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62395 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:00.356 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62395 00:14:00.357 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62395 ']' 00:14:00.357 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.357 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.357 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.357 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.357 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.357 [2024-12-06 13:08:06.669670] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:00.357 [2024-12-06 13:08:06.669833] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62395 ] 00:14:00.357 [2024-12-06 13:08:06.849934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.615 [2024-12-06 13:08:07.015312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.874 [2024-12-06 13:08:07.243366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.874 [2024-12-06 13:08:07.243521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:01.441 
13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.441 malloc1 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.441 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.441 [2024-12-06 13:08:07.807894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:01.441 [2024-12-06 13:08:07.808016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.441 [2024-12-06 13:08:07.808058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:01.441 [2024-12-06 13:08:07.808074] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.441 [2024-12-06 13:08:07.811434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.441 [2024-12-06 13:08:07.811674] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:01.442 pt1 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.442 malloc2 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.442 [2024-12-06 13:08:07.869103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.442 [2024-12-06 13:08:07.869348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.442 [2024-12-06 13:08:07.869434] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:01.442 [2024-12-06 13:08:07.869692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.442 [2024-12-06 13:08:07.872884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.442 [2024-12-06 13:08:07.873083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.442 
pt2 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.442 [2024-12-06 13:08:07.877442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:01.442 [2024-12-06 13:08:07.880168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.442 [2024-12-06 13:08:07.880405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:01.442 [2024-12-06 13:08:07.880427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:01.442 [2024-12-06 13:08:07.880824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:01.442 [2024-12-06 13:08:07.881035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:01.442 [2024-12-06 13:08:07.881065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:01.442 [2024-12-06 13:08:07.881267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.442 "name": "raid_bdev1", 00:14:01.442 "uuid": "be4be03f-175f-4053-976d-28c41df41144", 00:14:01.442 "strip_size_kb": 64, 00:14:01.442 "state": "online", 00:14:01.442 "raid_level": "concat", 00:14:01.442 "superblock": true, 00:14:01.442 "num_base_bdevs": 2, 00:14:01.442 "num_base_bdevs_discovered": 2, 00:14:01.442 "num_base_bdevs_operational": 2, 00:14:01.442 "base_bdevs_list": [ 00:14:01.442 { 00:14:01.442 "name": "pt1", 
00:14:01.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.442 "is_configured": true, 00:14:01.442 "data_offset": 2048, 00:14:01.442 "data_size": 63488 00:14:01.442 }, 00:14:01.442 { 00:14:01.442 "name": "pt2", 00:14:01.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.442 "is_configured": true, 00:14:01.442 "data_offset": 2048, 00:14:01.442 "data_size": 63488 00:14:01.442 } 00:14:01.442 ] 00:14:01.442 }' 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.442 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.009 [2024-12-06 13:08:08.442022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.009 13:08:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.009 "name": "raid_bdev1", 00:14:02.009 "aliases": [ 00:14:02.009 "be4be03f-175f-4053-976d-28c41df41144" 00:14:02.009 ], 00:14:02.009 "product_name": "Raid Volume", 00:14:02.009 "block_size": 512, 00:14:02.009 "num_blocks": 126976, 00:14:02.009 "uuid": "be4be03f-175f-4053-976d-28c41df41144", 00:14:02.009 "assigned_rate_limits": { 00:14:02.009 "rw_ios_per_sec": 0, 00:14:02.009 "rw_mbytes_per_sec": 0, 00:14:02.009 "r_mbytes_per_sec": 0, 00:14:02.009 "w_mbytes_per_sec": 0 00:14:02.009 }, 00:14:02.009 "claimed": false, 00:14:02.009 "zoned": false, 00:14:02.009 "supported_io_types": { 00:14:02.009 "read": true, 00:14:02.009 "write": true, 00:14:02.009 "unmap": true, 00:14:02.009 "flush": true, 00:14:02.009 "reset": true, 00:14:02.009 "nvme_admin": false, 00:14:02.009 "nvme_io": false, 00:14:02.009 "nvme_io_md": false, 00:14:02.009 "write_zeroes": true, 00:14:02.009 "zcopy": false, 00:14:02.009 "get_zone_info": false, 00:14:02.009 "zone_management": false, 00:14:02.009 "zone_append": false, 00:14:02.009 "compare": false, 00:14:02.009 "compare_and_write": false, 00:14:02.009 "abort": false, 00:14:02.009 "seek_hole": false, 00:14:02.009 "seek_data": false, 00:14:02.009 "copy": false, 00:14:02.009 "nvme_iov_md": false 00:14:02.009 }, 00:14:02.009 "memory_domains": [ 00:14:02.009 { 00:14:02.009 "dma_device_id": "system", 00:14:02.009 "dma_device_type": 1 00:14:02.009 }, 00:14:02.009 { 00:14:02.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.009 "dma_device_type": 2 00:14:02.009 }, 00:14:02.009 { 00:14:02.009 "dma_device_id": "system", 00:14:02.009 "dma_device_type": 1 00:14:02.009 }, 00:14:02.009 { 00:14:02.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.009 "dma_device_type": 2 00:14:02.009 } 00:14:02.009 ], 00:14:02.009 "driver_specific": { 00:14:02.009 "raid": { 00:14:02.009 "uuid": "be4be03f-175f-4053-976d-28c41df41144", 00:14:02.009 "strip_size_kb": 64, 00:14:02.009 "state": "online", 00:14:02.009 
"raid_level": "concat", 00:14:02.009 "superblock": true, 00:14:02.009 "num_base_bdevs": 2, 00:14:02.009 "num_base_bdevs_discovered": 2, 00:14:02.009 "num_base_bdevs_operational": 2, 00:14:02.009 "base_bdevs_list": [ 00:14:02.009 { 00:14:02.009 "name": "pt1", 00:14:02.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.009 "is_configured": true, 00:14:02.009 "data_offset": 2048, 00:14:02.010 "data_size": 63488 00:14:02.010 }, 00:14:02.010 { 00:14:02.010 "name": "pt2", 00:14:02.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.010 "is_configured": true, 00:14:02.010 "data_offset": 2048, 00:14:02.010 "data_size": 63488 00:14:02.010 } 00:14:02.010 ] 00:14:02.010 } 00:14:02.010 } 00:14:02.010 }' 00:14:02.010 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.010 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:02.010 pt2' 00:14:02.010 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.268 13:08:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:02.268 [2024-12-06 13:08:08.686110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=be4be03f-175f-4053-976d-28c41df41144 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
be4be03f-175f-4053-976d-28c41df41144 ']' 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.268 [2024-12-06 13:08:08.741716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.268 [2024-12-06 13:08:08.741754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.268 [2024-12-06 13:08:08.741912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.268 [2024-12-06 13:08:08.741986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.268 [2024-12-06 13:08:08.742007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:02.268 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.269 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:02.269 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:02.269 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:02.269 13:08:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:02.269 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.269 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.527 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.527 [2024-12-06 13:08:08.865819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:02.527 [2024-12-06 13:08:08.868817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:02.527 [2024-12-06 13:08:08.869020] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:02.527 [2024-12-06 13:08:08.869244] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:02.528 [2024-12-06 13:08:08.869420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.528 [2024-12-06 13:08:08.869578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:02.528 request: 00:14:02.528 { 00:14:02.528 "name": "raid_bdev1", 00:14:02.528 "raid_level": "concat", 00:14:02.528 "base_bdevs": [ 00:14:02.528 "malloc1", 00:14:02.528 "malloc2" 00:14:02.528 ], 00:14:02.528 "strip_size_kb": 64, 
00:14:02.528 "superblock": false, 00:14:02.528 "method": "bdev_raid_create", 00:14:02.528 "req_id": 1 00:14:02.528 } 00:14:02.528 Got JSON-RPC error response 00:14:02.528 response: 00:14:02.528 { 00:14:02.528 "code": -17, 00:14:02.528 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:02.528 } 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.528 [2024-12-06 13:08:08.933928] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:14:02.528 [2024-12-06 13:08:08.934020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.528 [2024-12-06 13:08:08.934052] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:02.528 [2024-12-06 13:08:08.934071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.528 [2024-12-06 13:08:08.937343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.528 [2024-12-06 13:08:08.937411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:02.528 [2024-12-06 13:08:08.937562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:02.528 [2024-12-06 13:08:08.937646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:02.528 pt1 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.528 "name": "raid_bdev1", 00:14:02.528 "uuid": "be4be03f-175f-4053-976d-28c41df41144", 00:14:02.528 "strip_size_kb": 64, 00:14:02.528 "state": "configuring", 00:14:02.528 "raid_level": "concat", 00:14:02.528 "superblock": true, 00:14:02.528 "num_base_bdevs": 2, 00:14:02.528 "num_base_bdevs_discovered": 1, 00:14:02.528 "num_base_bdevs_operational": 2, 00:14:02.528 "base_bdevs_list": [ 00:14:02.528 { 00:14:02.528 "name": "pt1", 00:14:02.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.528 "is_configured": true, 00:14:02.528 "data_offset": 2048, 00:14:02.528 "data_size": 63488 00:14:02.528 }, 00:14:02.528 { 00:14:02.528 "name": null, 00:14:02.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.528 "is_configured": false, 00:14:02.528 "data_offset": 2048, 00:14:02.528 "data_size": 63488 00:14:02.528 } 00:14:02.528 ] 00:14:02.528 }' 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.528 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.095 [2024-12-06 13:08:09.454190] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:03.095 [2024-12-06 13:08:09.454304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.095 [2024-12-06 13:08:09.454343] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:03.095 [2024-12-06 13:08:09.454364] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.095 [2024-12-06 13:08:09.455042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.095 [2024-12-06 13:08:09.455245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:03.095 [2024-12-06 13:08:09.455383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:03.095 [2024-12-06 13:08:09.455432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.095 [2024-12-06 13:08:09.455624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:03.095 [2024-12-06 13:08:09.455648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:03.095 [2024-12-06 13:08:09.456008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:03.095 [2024-12-06 13:08:09.456220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:14:03.095 [2024-12-06 13:08:09.456236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:03.095 [2024-12-06 13:08:09.456405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.095 pt2 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.095 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.095 "name": "raid_bdev1", 00:14:03.095 "uuid": "be4be03f-175f-4053-976d-28c41df41144", 00:14:03.095 "strip_size_kb": 64, 00:14:03.095 "state": "online", 00:14:03.095 "raid_level": "concat", 00:14:03.095 "superblock": true, 00:14:03.095 "num_base_bdevs": 2, 00:14:03.095 "num_base_bdevs_discovered": 2, 00:14:03.095 "num_base_bdevs_operational": 2, 00:14:03.095 "base_bdevs_list": [ 00:14:03.096 { 00:14:03.096 "name": "pt1", 00:14:03.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.096 "is_configured": true, 00:14:03.096 "data_offset": 2048, 00:14:03.096 "data_size": 63488 00:14:03.096 }, 00:14:03.096 { 00:14:03.096 "name": "pt2", 00:14:03.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.096 "is_configured": true, 00:14:03.096 "data_offset": 2048, 00:14:03.096 "data_size": 63488 00:14:03.096 } 00:14:03.096 ] 00:14:03.096 }' 00:14:03.096 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.096 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.662 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:03.662 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:03.662 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.662 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.662 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.662 13:08:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.662 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.662 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.662 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.663 [2024-12-06 13:08:10.010683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.663 "name": "raid_bdev1", 00:14:03.663 "aliases": [ 00:14:03.663 "be4be03f-175f-4053-976d-28c41df41144" 00:14:03.663 ], 00:14:03.663 "product_name": "Raid Volume", 00:14:03.663 "block_size": 512, 00:14:03.663 "num_blocks": 126976, 00:14:03.663 "uuid": "be4be03f-175f-4053-976d-28c41df41144", 00:14:03.663 "assigned_rate_limits": { 00:14:03.663 "rw_ios_per_sec": 0, 00:14:03.663 "rw_mbytes_per_sec": 0, 00:14:03.663 "r_mbytes_per_sec": 0, 00:14:03.663 "w_mbytes_per_sec": 0 00:14:03.663 }, 00:14:03.663 "claimed": false, 00:14:03.663 "zoned": false, 00:14:03.663 "supported_io_types": { 00:14:03.663 "read": true, 00:14:03.663 "write": true, 00:14:03.663 "unmap": true, 00:14:03.663 "flush": true, 00:14:03.663 "reset": true, 00:14:03.663 "nvme_admin": false, 00:14:03.663 "nvme_io": false, 00:14:03.663 "nvme_io_md": false, 00:14:03.663 "write_zeroes": true, 00:14:03.663 "zcopy": false, 00:14:03.663 "get_zone_info": false, 00:14:03.663 "zone_management": false, 00:14:03.663 "zone_append": false, 00:14:03.663 "compare": false, 00:14:03.663 "compare_and_write": false, 00:14:03.663 "abort": false, 00:14:03.663 "seek_hole": false, 00:14:03.663 
"seek_data": false, 00:14:03.663 "copy": false, 00:14:03.663 "nvme_iov_md": false 00:14:03.663 }, 00:14:03.663 "memory_domains": [ 00:14:03.663 { 00:14:03.663 "dma_device_id": "system", 00:14:03.663 "dma_device_type": 1 00:14:03.663 }, 00:14:03.663 { 00:14:03.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.663 "dma_device_type": 2 00:14:03.663 }, 00:14:03.663 { 00:14:03.663 "dma_device_id": "system", 00:14:03.663 "dma_device_type": 1 00:14:03.663 }, 00:14:03.663 { 00:14:03.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.663 "dma_device_type": 2 00:14:03.663 } 00:14:03.663 ], 00:14:03.663 "driver_specific": { 00:14:03.663 "raid": { 00:14:03.663 "uuid": "be4be03f-175f-4053-976d-28c41df41144", 00:14:03.663 "strip_size_kb": 64, 00:14:03.663 "state": "online", 00:14:03.663 "raid_level": "concat", 00:14:03.663 "superblock": true, 00:14:03.663 "num_base_bdevs": 2, 00:14:03.663 "num_base_bdevs_discovered": 2, 00:14:03.663 "num_base_bdevs_operational": 2, 00:14:03.663 "base_bdevs_list": [ 00:14:03.663 { 00:14:03.663 "name": "pt1", 00:14:03.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.663 "is_configured": true, 00:14:03.663 "data_offset": 2048, 00:14:03.663 "data_size": 63488 00:14:03.663 }, 00:14:03.663 { 00:14:03.663 "name": "pt2", 00:14:03.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.663 "is_configured": true, 00:14:03.663 "data_offset": 2048, 00:14:03.663 "data_size": 63488 00:14:03.663 } 00:14:03.663 ] 00:14:03.663 } 00:14:03.663 } 00:14:03.663 }' 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:03.663 pt2' 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.663 13:08:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.663 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:03.921 [2024-12-06 13:08:10.318937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' be4be03f-175f-4053-976d-28c41df41144 '!=' be4be03f-175f-4053-976d-28c41df41144 ']' 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:03.921 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62395 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62395 ']' 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62395 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62395 00:14:03.922 killing process with pid 62395 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62395' 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62395 00:14:03.922 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62395 00:14:03.922 [2024-12-06 13:08:10.409629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.922 [2024-12-06 13:08:10.409803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.922 [2024-12-06 13:08:10.409910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.922 [2024-12-06 13:08:10.409946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:04.179 [2024-12-06 13:08:10.659579] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.551 13:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:05.551 00:14:05.551 real 0m5.403s 00:14:05.551 user 0m7.804s 00:14:05.551 sys 0m0.783s 00:14:05.551 ************************************ 00:14:05.552 END TEST raid_superblock_test 00:14:05.552 ************************************ 00:14:05.552 13:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.552 13:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.552 13:08:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:14:05.552 13:08:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:05.552 13:08:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.552 13:08:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.552 ************************************ 00:14:05.552 START TEST raid_read_error_test 00:14:05.552 ************************************ 00:14:05.552 13:08:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:05.552 13:08:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:05.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SMjiHho2Vb 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62622 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62622 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62622 ']' 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.552 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.809 [2024-12-06 13:08:12.132397] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:05.809 [2024-12-06 13:08:12.132589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62622 ] 00:14:05.809 [2024-12-06 13:08:12.318551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.066 [2024-12-06 13:08:12.522253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.324 [2024-12-06 13:08:12.791594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.324 [2024-12-06 13:08:12.791721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.889 BaseBdev1_malloc 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.889 true 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.889 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 [2024-12-06 13:08:13.225344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:06.890 [2024-12-06 13:08:13.225621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.890 [2024-12-06 13:08:13.225668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:06.890 [2024-12-06 13:08:13.225690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.890 [2024-12-06 13:08:13.228872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.890 BaseBdev1 00:14:06.890 [2024-12-06 13:08:13.229051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 BaseBdev2_malloc 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 true 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 [2024-12-06 13:08:13.294016] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:06.890 [2024-12-06 13:08:13.294099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.890 [2024-12-06 13:08:13.294129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:06.890 [2024-12-06 13:08:13.294159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.890 [2024-12-06 13:08:13.297133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.890 [2024-12-06 13:08:13.297185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.890 BaseBdev2 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 [2024-12-06 13:08:13.302192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:06.890 [2024-12-06 13:08:13.304939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.890 [2024-12-06 13:08:13.305339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:06.890 [2024-12-06 13:08:13.305509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:06.890 [2024-12-06 13:08:13.305864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:06.890 [2024-12-06 13:08:13.306260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:06.890 [2024-12-06 13:08:13.306397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:06.890 [2024-12-06 13:08:13.306795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.890 "name": "raid_bdev1", 00:14:06.890 "uuid": "bd1f1c18-e2b0-49c5-9aee-4c3227e58f18", 00:14:06.890 "strip_size_kb": 64, 00:14:06.890 "state": "online", 00:14:06.890 "raid_level": "concat", 00:14:06.890 "superblock": true, 00:14:06.890 "num_base_bdevs": 2, 00:14:06.890 "num_base_bdevs_discovered": 2, 00:14:06.890 "num_base_bdevs_operational": 2, 00:14:06.890 "base_bdevs_list": [ 00:14:06.890 { 00:14:06.890 "name": "BaseBdev1", 00:14:06.890 "uuid": "a83dd62d-769a-5507-a5c4-58f28f70491b", 00:14:06.890 "is_configured": true, 00:14:06.890 "data_offset": 2048, 00:14:06.890 "data_size": 63488 00:14:06.890 }, 00:14:06.890 { 00:14:06.890 "name": "BaseBdev2", 00:14:06.890 "uuid": "cb7a8bb2-cc21-5de8-9032-971379e811da", 00:14:06.890 "is_configured": true, 00:14:06.890 "data_offset": 2048, 00:14:06.890 "data_size": 63488 00:14:06.890 } 00:14:06.890 ] 00:14:06.890 }' 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.890 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.456 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:14:07.456 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:07.456 [2024-12-06 13:08:13.884510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.389 "name": "raid_bdev1", 00:14:08.389 "uuid": "bd1f1c18-e2b0-49c5-9aee-4c3227e58f18", 00:14:08.389 "strip_size_kb": 64, 00:14:08.389 "state": "online", 00:14:08.389 "raid_level": "concat", 00:14:08.389 "superblock": true, 00:14:08.389 "num_base_bdevs": 2, 00:14:08.389 "num_base_bdevs_discovered": 2, 00:14:08.389 "num_base_bdevs_operational": 2, 00:14:08.389 "base_bdevs_list": [ 00:14:08.389 { 00:14:08.389 "name": "BaseBdev1", 00:14:08.389 "uuid": "a83dd62d-769a-5507-a5c4-58f28f70491b", 00:14:08.389 "is_configured": true, 00:14:08.389 "data_offset": 2048, 00:14:08.389 "data_size": 63488 00:14:08.389 }, 00:14:08.389 { 00:14:08.389 "name": "BaseBdev2", 00:14:08.389 "uuid": "cb7a8bb2-cc21-5de8-9032-971379e811da", 00:14:08.389 "is_configured": true, 00:14:08.389 "data_offset": 2048, 00:14:08.389 "data_size": 63488 00:14:08.389 } 00:14:08.389 ] 00:14:08.389 }' 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.389 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.954 13:08:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.954 [2024-12-06 13:08:15.349038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.954 [2024-12-06 13:08:15.349127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.954 { 00:14:08.954 "results": [ 00:14:08.954 { 00:14:08.954 "job": "raid_bdev1", 00:14:08.954 "core_mask": "0x1", 00:14:08.954 "workload": "randrw", 00:14:08.954 "percentage": 50, 00:14:08.954 "status": "finished", 00:14:08.954 "queue_depth": 1, 00:14:08.954 "io_size": 131072, 00:14:08.954 "runtime": 1.461883, 00:14:08.954 "iops": 8958.993298369294, 00:14:08.954 "mibps": 1119.8741622961618, 00:14:08.954 "io_failed": 1, 00:14:08.954 "io_timeout": 0, 00:14:08.954 "avg_latency_us": 156.6515204264357, 00:14:08.954 "min_latency_us": 40.49454545454545, 00:14:08.954 "max_latency_us": 1980.9745454545455 00:14:08.954 } 00:14:08.954 ], 00:14:08.954 "core_count": 1 00:14:08.954 } 00:14:08.954 [2024-12-06 13:08:15.353086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.954 [2024-12-06 13:08:15.353210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.954 [2024-12-06 13:08:15.353275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.954 [2024-12-06 13:08:15.353314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62622 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62622 ']' 00:14:08.954 13:08:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62622 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62622 00:14:08.954 killing process with pid 62622 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62622' 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62622 00:14:08.954 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62622 00:14:08.954 [2024-12-06 13:08:15.394473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.211 [2024-12-06 13:08:15.533584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SMjiHho2Vb 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:14:10.584 00:14:10.584 real 0m4.773s 00:14:10.584 user 0m5.891s 00:14:10.584 sys 0m0.648s 00:14:10.584 ************************************ 00:14:10.584 END TEST raid_read_error_test 00:14:10.584 ************************************ 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.584 13:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.584 13:08:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:14:10.584 13:08:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:10.584 13:08:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.584 13:08:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.584 ************************************ 00:14:10.584 START TEST raid_write_error_test 00:14:10.584 ************************************ 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:10.584 13:08:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Mjg4SVOySQ 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62763 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62763 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:10.584 13:08:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62763 ']' 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.584 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.584 [2024-12-06 13:08:16.964153] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:10.584 [2024-12-06 13:08:16.964561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62763 ] 00:14:10.842 [2024-12-06 13:08:17.152112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.842 [2024-12-06 13:08:17.308374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.099 [2024-12-06 13:08:17.538666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.099 [2024-12-06 13:08:17.539049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.662 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.662 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:11.662 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:14:11.662 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.662 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.662 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.662 BaseBdev1_malloc 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.662 true 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.662 [2024-12-06 13:08:18.028174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:11.662 [2024-12-06 13:08:18.028427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.662 [2024-12-06 13:08:18.028482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:11.662 [2024-12-06 13:08:18.028503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.662 [2024-12-06 13:08:18.031643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.662 [2024-12-06 13:08:18.031695] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.662 BaseBdev1 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.662 BaseBdev2_malloc 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.662 true 00:14:11.662 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.663 [2024-12-06 13:08:18.090103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:11.663 [2024-12-06 13:08:18.090395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.663 [2024-12-06 13:08:18.090539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:11.663 
[2024-12-06 13:08:18.090657] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.663 [2024-12-06 13:08:18.093756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.663 [2024-12-06 13:08:18.093807] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.663 BaseBdev2 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.663 [2024-12-06 13:08:18.098234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.663 [2024-12-06 13:08:18.101226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.663 [2024-12-06 13:08:18.101676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:11.663 [2024-12-06 13:08:18.101829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:11.663 [2024-12-06 13:08:18.102179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:11.663 [2024-12-06 13:08:18.102437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:11.663 [2024-12-06 13:08:18.102495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:11.663 [2024-12-06 13:08:18.102794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.663 
13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.663 "name": "raid_bdev1", 00:14:11.663 "uuid": "a8a5f5ab-3d59-40cd-973f-869a7dd82be5", 00:14:11.663 "strip_size_kb": 64, 00:14:11.663 "state": "online", 00:14:11.663 "raid_level": "concat", 00:14:11.663 "superblock": true, 
00:14:11.663 "num_base_bdevs": 2, 00:14:11.663 "num_base_bdevs_discovered": 2, 00:14:11.663 "num_base_bdevs_operational": 2, 00:14:11.663 "base_bdevs_list": [ 00:14:11.663 { 00:14:11.663 "name": "BaseBdev1", 00:14:11.663 "uuid": "cab77b0f-6e5f-5a6e-a950-4e87ef5d3dec", 00:14:11.663 "is_configured": true, 00:14:11.663 "data_offset": 2048, 00:14:11.663 "data_size": 63488 00:14:11.663 }, 00:14:11.663 { 00:14:11.663 "name": "BaseBdev2", 00:14:11.663 "uuid": "26f48140-ab7f-5b66-a5c5-d2ccaebec873", 00:14:11.663 "is_configured": true, 00:14:11.663 "data_offset": 2048, 00:14:11.663 "data_size": 63488 00:14:11.663 } 00:14:11.663 ] 00:14:11.663 }' 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.663 13:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.228 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:12.228 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:12.487 [2024-12-06 13:08:18.780678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.423 "name": "raid_bdev1", 00:14:13.423 "uuid": "a8a5f5ab-3d59-40cd-973f-869a7dd82be5", 00:14:13.423 "strip_size_kb": 64, 00:14:13.423 "state": "online", 00:14:13.423 "raid_level": "concat", 
00:14:13.423 "superblock": true, 00:14:13.423 "num_base_bdevs": 2, 00:14:13.423 "num_base_bdevs_discovered": 2, 00:14:13.423 "num_base_bdevs_operational": 2, 00:14:13.423 "base_bdevs_list": [ 00:14:13.423 { 00:14:13.423 "name": "BaseBdev1", 00:14:13.423 "uuid": "cab77b0f-6e5f-5a6e-a950-4e87ef5d3dec", 00:14:13.423 "is_configured": true, 00:14:13.423 "data_offset": 2048, 00:14:13.423 "data_size": 63488 00:14:13.423 }, 00:14:13.423 { 00:14:13.423 "name": "BaseBdev2", 00:14:13.423 "uuid": "26f48140-ab7f-5b66-a5c5-d2ccaebec873", 00:14:13.423 "is_configured": true, 00:14:13.423 "data_offset": 2048, 00:14:13.423 "data_size": 63488 00:14:13.423 } 00:14:13.423 ] 00:14:13.423 }' 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.423 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.682 13:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.682 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.682 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.682 [2024-12-06 13:08:20.195705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.682 [2024-12-06 13:08:20.195754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.682 [2024-12-06 13:08:20.199415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.682 [2024-12-06 13:08:20.199499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.682 [2024-12-06 13:08:20.199719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.682 [2024-12-06 13:08:20.199810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:13.682 { 
00:14:13.682 "results": [ 00:14:13.682 { 00:14:13.682 "job": "raid_bdev1", 00:14:13.682 "core_mask": "0x1", 00:14:13.682 "workload": "randrw", 00:14:13.682 "percentage": 50, 00:14:13.682 "status": "finished", 00:14:13.682 "queue_depth": 1, 00:14:13.682 "io_size": 131072, 00:14:13.682 "runtime": 1.412295, 00:14:13.682 "iops": 9378.352256433676, 00:14:13.682 "mibps": 1172.2940320542095, 00:14:13.682 "io_failed": 1, 00:14:13.682 "io_timeout": 0, 00:14:13.682 "avg_latency_us": 149.6635691049099, 00:14:13.682 "min_latency_us": 40.49454545454545, 00:14:13.682 "max_latency_us": 1906.5018181818182 00:14:13.682 } 00:14:13.682 ], 00:14:13.682 "core_count": 1 00:14:13.682 } 00:14:13.682 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.682 13:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62763 00:14:13.682 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62763 ']' 00:14:13.682 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62763 00:14:13.682 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:13.940 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.940 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62763 00:14:13.940 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.940 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.940 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62763' 00:14:13.940 killing process with pid 62763 00:14:13.940 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62763 00:14:13.940 [2024-12-06 13:08:20.242038] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.940 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62763 00:14:13.940 [2024-12-06 13:08:20.386984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Mjg4SVOySQ 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:15.316 00:14:15.316 real 0m4.787s 00:14:15.316 user 0m5.924s 00:14:15.316 sys 0m0.659s 00:14:15.316 ************************************ 00:14:15.316 END TEST raid_write_error_test 00:14:15.316 ************************************ 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.316 13:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.316 13:08:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:15.316 13:08:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:15.316 13:08:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:15.316 13:08:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.316 13:08:21 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.316 ************************************ 00:14:15.316 START TEST raid_state_function_test 00:14:15.316 ************************************ 00:14:15.316 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:14:15.316 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:15.316 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:15.316 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:15.316 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:15.316 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:15.316 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:15.317 Process raid pid: 62912 00:14:15.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62912 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62912' 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62912 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62912 ']' 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.317 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.317 [2024-12-06 13:08:21.800914] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:15.317 [2024-12-06 13:08:21.801425] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.576 [2024-12-06 13:08:21.994140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.835 [2024-12-06 13:08:22.154434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.094 [2024-12-06 13:08:22.398620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.094 [2024-12-06 13:08:22.398921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.352 [2024-12-06 13:08:22.796664] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.352 [2024-12-06 13:08:22.796806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.352 [2024-12-06 13:08:22.796823] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:14:16.352 [2024-12-06 13:08:22.796840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.352 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.353 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:16.353 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.353 "name": "Existed_Raid", 00:14:16.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.353 "strip_size_kb": 0, 00:14:16.353 "state": "configuring", 00:14:16.353 "raid_level": "raid1", 00:14:16.353 "superblock": false, 00:14:16.353 "num_base_bdevs": 2, 00:14:16.353 "num_base_bdevs_discovered": 0, 00:14:16.353 "num_base_bdevs_operational": 2, 00:14:16.353 "base_bdevs_list": [ 00:14:16.353 { 00:14:16.353 "name": "BaseBdev1", 00:14:16.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.353 "is_configured": false, 00:14:16.353 "data_offset": 0, 00:14:16.353 "data_size": 0 00:14:16.353 }, 00:14:16.353 { 00:14:16.353 "name": "BaseBdev2", 00:14:16.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.353 "is_configured": false, 00:14:16.353 "data_offset": 0, 00:14:16.353 "data_size": 0 00:14:16.353 } 00:14:16.353 ] 00:14:16.353 }' 00:14:16.353 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.353 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.920 [2024-12-06 13:08:23.320959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.920 [2024-12-06 13:08:23.321039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.920 [2024-12-06 13:08:23.328905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.920 [2024-12-06 13:08:23.329181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.920 [2024-12-06 13:08:23.329357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.920 [2024-12-06 13:08:23.329576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.920 [2024-12-06 13:08:23.396534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.920 BaseBdev1 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:16.920 
13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.920 [ 00:14:16.920 { 00:14:16.920 "name": "BaseBdev1", 00:14:16.920 "aliases": [ 00:14:16.920 "4dea360f-f68f-4644-945c-7cb7ef0ba3f9" 00:14:16.920 ], 00:14:16.920 "product_name": "Malloc disk", 00:14:16.920 "block_size": 512, 00:14:16.920 "num_blocks": 65536, 00:14:16.920 "uuid": "4dea360f-f68f-4644-945c-7cb7ef0ba3f9", 00:14:16.920 "assigned_rate_limits": { 00:14:16.920 "rw_ios_per_sec": 0, 00:14:16.920 "rw_mbytes_per_sec": 0, 00:14:16.920 "r_mbytes_per_sec": 0, 00:14:16.920 "w_mbytes_per_sec": 0 00:14:16.920 }, 00:14:16.920 "claimed": true, 00:14:16.920 "claim_type": "exclusive_write", 00:14:16.920 "zoned": false, 00:14:16.920 "supported_io_types": { 00:14:16.920 "read": true, 00:14:16.920 "write": true, 00:14:16.920 "unmap": true, 00:14:16.920 "flush": true, 00:14:16.920 "reset": true, 00:14:16.920 "nvme_admin": false, 00:14:16.920 "nvme_io": false, 00:14:16.920 "nvme_io_md": false, 00:14:16.920 "write_zeroes": true, 00:14:16.920 "zcopy": true, 00:14:16.920 "get_zone_info": 
false, 00:14:16.920 "zone_management": false, 00:14:16.920 "zone_append": false, 00:14:16.920 "compare": false, 00:14:16.920 "compare_and_write": false, 00:14:16.920 "abort": true, 00:14:16.920 "seek_hole": false, 00:14:16.920 "seek_data": false, 00:14:16.920 "copy": true, 00:14:16.920 "nvme_iov_md": false 00:14:16.920 }, 00:14:16.920 "memory_domains": [ 00:14:16.920 { 00:14:16.920 "dma_device_id": "system", 00:14:16.920 "dma_device_type": 1 00:14:16.920 }, 00:14:16.920 { 00:14:16.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.920 "dma_device_type": 2 00:14:16.920 } 00:14:16.920 ], 00:14:16.920 "driver_specific": {} 00:14:16.920 } 00:14:16.920 ] 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.920 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.178 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.178 "name": "Existed_Raid", 00:14:17.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.178 "strip_size_kb": 0, 00:14:17.178 "state": "configuring", 00:14:17.178 "raid_level": "raid1", 00:14:17.178 "superblock": false, 00:14:17.178 "num_base_bdevs": 2, 00:14:17.178 "num_base_bdevs_discovered": 1, 00:14:17.178 "num_base_bdevs_operational": 2, 00:14:17.178 "base_bdevs_list": [ 00:14:17.178 { 00:14:17.178 "name": "BaseBdev1", 00:14:17.178 "uuid": "4dea360f-f68f-4644-945c-7cb7ef0ba3f9", 00:14:17.178 "is_configured": true, 00:14:17.178 "data_offset": 0, 00:14:17.178 "data_size": 65536 00:14:17.178 }, 00:14:17.178 { 00:14:17.178 "name": "BaseBdev2", 00:14:17.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.178 "is_configured": false, 00:14:17.178 "data_offset": 0, 00:14:17.178 "data_size": 0 00:14:17.178 } 00:14:17.178 ] 00:14:17.178 }' 00:14:17.178 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.178 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.745 [2024-12-06 13:08:23.972835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.745 [2024-12-06 13:08:23.972907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.745 [2024-12-06 13:08:23.980872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.745 [2024-12-06 13:08:23.983750] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.745 [2024-12-06 13:08:23.983948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.745 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.745 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.745 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.745 "name": "Existed_Raid", 00:14:17.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.745 "strip_size_kb": 0, 00:14:17.745 "state": "configuring", 00:14:17.745 "raid_level": "raid1", 00:14:17.745 "superblock": false, 00:14:17.745 "num_base_bdevs": 2, 00:14:17.745 "num_base_bdevs_discovered": 1, 00:14:17.745 "num_base_bdevs_operational": 2, 00:14:17.745 "base_bdevs_list": [ 00:14:17.745 { 00:14:17.745 "name": "BaseBdev1", 00:14:17.745 "uuid": "4dea360f-f68f-4644-945c-7cb7ef0ba3f9", 00:14:17.745 
"is_configured": true, 00:14:17.745 "data_offset": 0, 00:14:17.745 "data_size": 65536 00:14:17.745 }, 00:14:17.745 { 00:14:17.745 "name": "BaseBdev2", 00:14:17.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.745 "is_configured": false, 00:14:17.745 "data_offset": 0, 00:14:17.745 "data_size": 0 00:14:17.745 } 00:14:17.745 ] 00:14:17.745 }' 00:14:17.745 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.745 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.003 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.003 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.003 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.261 [2024-12-06 13:08:24.551340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.261 BaseBdev2 00:14:18.261 [2024-12-06 13:08:24.551711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:18.261 [2024-12-06 13:08:24.551737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:18.261 [2024-12-06 13:08:24.552108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:18.261 [2024-12-06 13:08:24.552367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:18.261 [2024-12-06 13:08:24.552389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:18.261 [2024-12-06 13:08:24.552711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.261 [ 00:14:18.261 { 00:14:18.261 "name": "BaseBdev2", 00:14:18.261 "aliases": [ 00:14:18.261 "7213704e-e79a-4e80-b342-840b6d3d1302" 00:14:18.261 ], 00:14:18.261 "product_name": "Malloc disk", 00:14:18.261 "block_size": 512, 00:14:18.261 "num_blocks": 65536, 00:14:18.261 "uuid": "7213704e-e79a-4e80-b342-840b6d3d1302", 00:14:18.261 "assigned_rate_limits": { 00:14:18.261 "rw_ios_per_sec": 0, 00:14:18.261 "rw_mbytes_per_sec": 0, 00:14:18.261 "r_mbytes_per_sec": 0, 00:14:18.261 "w_mbytes_per_sec": 0 00:14:18.261 }, 00:14:18.261 "claimed": true, 00:14:18.261 "claim_type": 
"exclusive_write", 00:14:18.261 "zoned": false, 00:14:18.261 "supported_io_types": { 00:14:18.261 "read": true, 00:14:18.261 "write": true, 00:14:18.261 "unmap": true, 00:14:18.261 "flush": true, 00:14:18.261 "reset": true, 00:14:18.261 "nvme_admin": false, 00:14:18.261 "nvme_io": false, 00:14:18.261 "nvme_io_md": false, 00:14:18.261 "write_zeroes": true, 00:14:18.261 "zcopy": true, 00:14:18.261 "get_zone_info": false, 00:14:18.261 "zone_management": false, 00:14:18.261 "zone_append": false, 00:14:18.261 "compare": false, 00:14:18.261 "compare_and_write": false, 00:14:18.261 "abort": true, 00:14:18.261 "seek_hole": false, 00:14:18.261 "seek_data": false, 00:14:18.261 "copy": true, 00:14:18.261 "nvme_iov_md": false 00:14:18.261 }, 00:14:18.261 "memory_domains": [ 00:14:18.261 { 00:14:18.261 "dma_device_id": "system", 00:14:18.261 "dma_device_type": 1 00:14:18.261 }, 00:14:18.261 { 00:14:18.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.261 "dma_device_type": 2 00:14:18.261 } 00:14:18.261 ], 00:14:18.261 "driver_specific": {} 00:14:18.261 } 00:14:18.261 ] 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.261 
13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.261 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.262 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.262 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.262 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.262 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.262 "name": "Existed_Raid", 00:14:18.262 "uuid": "d326fe84-7d30-4ae8-a18f-de947f728497", 00:14:18.262 "strip_size_kb": 0, 00:14:18.262 "state": "online", 00:14:18.262 "raid_level": "raid1", 00:14:18.262 "superblock": false, 00:14:18.262 "num_base_bdevs": 2, 00:14:18.262 "num_base_bdevs_discovered": 2, 00:14:18.262 "num_base_bdevs_operational": 2, 00:14:18.262 "base_bdevs_list": [ 00:14:18.262 { 00:14:18.262 "name": "BaseBdev1", 00:14:18.262 "uuid": "4dea360f-f68f-4644-945c-7cb7ef0ba3f9", 00:14:18.262 "is_configured": true, 00:14:18.262 "data_offset": 0, 00:14:18.262 "data_size": 65536 00:14:18.262 }, 00:14:18.262 { 00:14:18.262 "name": "BaseBdev2", 
00:14:18.262 "uuid": "7213704e-e79a-4e80-b342-840b6d3d1302", 00:14:18.262 "is_configured": true, 00:14:18.262 "data_offset": 0, 00:14:18.262 "data_size": 65536 00:14:18.262 } 00:14:18.262 ] 00:14:18.262 }' 00:14:18.262 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.262 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.827 [2024-12-06 13:08:25.148000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:18.827 "name": "Existed_Raid", 00:14:18.827 "aliases": [ 00:14:18.827 "d326fe84-7d30-4ae8-a18f-de947f728497" 00:14:18.827 ], 
00:14:18.827 "product_name": "Raid Volume", 00:14:18.827 "block_size": 512, 00:14:18.827 "num_blocks": 65536, 00:14:18.827 "uuid": "d326fe84-7d30-4ae8-a18f-de947f728497", 00:14:18.827 "assigned_rate_limits": { 00:14:18.827 "rw_ios_per_sec": 0, 00:14:18.827 "rw_mbytes_per_sec": 0, 00:14:18.827 "r_mbytes_per_sec": 0, 00:14:18.827 "w_mbytes_per_sec": 0 00:14:18.827 }, 00:14:18.827 "claimed": false, 00:14:18.827 "zoned": false, 00:14:18.827 "supported_io_types": { 00:14:18.827 "read": true, 00:14:18.827 "write": true, 00:14:18.827 "unmap": false, 00:14:18.827 "flush": false, 00:14:18.827 "reset": true, 00:14:18.827 "nvme_admin": false, 00:14:18.827 "nvme_io": false, 00:14:18.827 "nvme_io_md": false, 00:14:18.827 "write_zeroes": true, 00:14:18.827 "zcopy": false, 00:14:18.827 "get_zone_info": false, 00:14:18.827 "zone_management": false, 00:14:18.827 "zone_append": false, 00:14:18.827 "compare": false, 00:14:18.827 "compare_and_write": false, 00:14:18.827 "abort": false, 00:14:18.827 "seek_hole": false, 00:14:18.827 "seek_data": false, 00:14:18.827 "copy": false, 00:14:18.827 "nvme_iov_md": false 00:14:18.827 }, 00:14:18.827 "memory_domains": [ 00:14:18.827 { 00:14:18.827 "dma_device_id": "system", 00:14:18.827 "dma_device_type": 1 00:14:18.827 }, 00:14:18.827 { 00:14:18.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.827 "dma_device_type": 2 00:14:18.827 }, 00:14:18.827 { 00:14:18.827 "dma_device_id": "system", 00:14:18.827 "dma_device_type": 1 00:14:18.827 }, 00:14:18.827 { 00:14:18.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.827 "dma_device_type": 2 00:14:18.827 } 00:14:18.827 ], 00:14:18.827 "driver_specific": { 00:14:18.827 "raid": { 00:14:18.827 "uuid": "d326fe84-7d30-4ae8-a18f-de947f728497", 00:14:18.827 "strip_size_kb": 0, 00:14:18.827 "state": "online", 00:14:18.827 "raid_level": "raid1", 00:14:18.827 "superblock": false, 00:14:18.827 "num_base_bdevs": 2, 00:14:18.827 "num_base_bdevs_discovered": 2, 00:14:18.827 "num_base_bdevs_operational": 
2, 00:14:18.827 "base_bdevs_list": [ 00:14:18.827 { 00:14:18.827 "name": "BaseBdev1", 00:14:18.827 "uuid": "4dea360f-f68f-4644-945c-7cb7ef0ba3f9", 00:14:18.827 "is_configured": true, 00:14:18.827 "data_offset": 0, 00:14:18.827 "data_size": 65536 00:14:18.827 }, 00:14:18.827 { 00:14:18.827 "name": "BaseBdev2", 00:14:18.827 "uuid": "7213704e-e79a-4e80-b342-840b6d3d1302", 00:14:18.827 "is_configured": true, 00:14:18.827 "data_offset": 0, 00:14:18.827 "data_size": 65536 00:14:18.827 } 00:14:18.827 ] 00:14:18.827 } 00:14:18.827 } 00:14:18.827 }' 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:18.827 BaseBdev2' 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.827 13:08:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.827 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.086 [2024-12-06 13:08:25.411696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.086 "name": "Existed_Raid", 00:14:19.086 "uuid": 
"d326fe84-7d30-4ae8-a18f-de947f728497", 00:14:19.086 "strip_size_kb": 0, 00:14:19.086 "state": "online", 00:14:19.086 "raid_level": "raid1", 00:14:19.086 "superblock": false, 00:14:19.086 "num_base_bdevs": 2, 00:14:19.086 "num_base_bdevs_discovered": 1, 00:14:19.086 "num_base_bdevs_operational": 1, 00:14:19.086 "base_bdevs_list": [ 00:14:19.086 { 00:14:19.086 "name": null, 00:14:19.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.086 "is_configured": false, 00:14:19.086 "data_offset": 0, 00:14:19.086 "data_size": 65536 00:14:19.086 }, 00:14:19.086 { 00:14:19.086 "name": "BaseBdev2", 00:14:19.086 "uuid": "7213704e-e79a-4e80-b342-840b6d3d1302", 00:14:19.086 "is_configured": true, 00:14:19.086 "data_offset": 0, 00:14:19.086 "data_size": 65536 00:14:19.086 } 00:14:19.086 ] 00:14:19.086 }' 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.086 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.651 [2024-12-06 13:08:26.068088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.651 [2024-12-06 13:08:26.068573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.651 [2024-12-06 13:08:26.152608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.651 [2024-12-06 13:08:26.152697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.651 [2024-12-06 13:08:26.152720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:19.651 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:19.909 
13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62912 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62912 ']' 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62912 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62912 00:14:19.909 killing process with pid 62912 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62912' 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62912 00:14:19.909 [2024-12-06 13:08:26.253641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.909 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62912 00:14:19.909 [2024-12-06 13:08:26.269071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:21.284 00:14:21.284 real 0m5.723s 00:14:21.284 user 0m8.489s 00:14:21.284 sys 0m0.923s 00:14:21.284 ************************************ 00:14:21.284 END TEST raid_state_function_test 00:14:21.284 
************************************ 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.284 13:08:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:21.284 13:08:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:21.284 13:08:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.284 13:08:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.284 ************************************ 00:14:21.284 START TEST raid_state_function_test_sb 00:14:21.284 ************************************ 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:14:21.284 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63171 00:14:21.285 Process raid pid: 63171 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63171' 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63171 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 63171 ']' 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.285 13:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.285 [2024-12-06 13:08:27.590717] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:21.285 [2024-12-06 13:08:27.590931] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.285 [2024-12-06 13:08:27.773129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.543 [2024-12-06 13:08:27.918294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.802 [2024-12-06 13:08:28.147536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.802 [2024-12-06 13:08:28.147608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.368 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.369 [2024-12-06 13:08:28.600777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.369 [2024-12-06 13:08:28.600877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.369 [2024-12-06 13:08:28.600894] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.369 [2024-12-06 13:08:28.600910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.369 "name": "Existed_Raid", 00:14:22.369 "uuid": "9f367b1c-451b-4a65-990e-46d14123a7b5", 00:14:22.369 "strip_size_kb": 0, 00:14:22.369 "state": "configuring", 00:14:22.369 "raid_level": "raid1", 00:14:22.369 "superblock": true, 00:14:22.369 "num_base_bdevs": 2, 00:14:22.369 "num_base_bdevs_discovered": 0, 00:14:22.369 "num_base_bdevs_operational": 2, 00:14:22.369 "base_bdevs_list": [ 00:14:22.369 { 00:14:22.369 "name": "BaseBdev1", 00:14:22.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.369 "is_configured": false, 00:14:22.369 "data_offset": 0, 00:14:22.369 "data_size": 0 00:14:22.369 }, 00:14:22.369 { 00:14:22.369 "name": "BaseBdev2", 00:14:22.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.369 "is_configured": false, 00:14:22.369 "data_offset": 0, 00:14:22.369 "data_size": 0 00:14:22.369 } 00:14:22.369 ] 00:14:22.369 }' 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.369 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.627 [2024-12-06 13:08:29.104879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.627 [2024-12-06 13:08:29.104929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.627 [2024-12-06 13:08:29.112879] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.627 [2024-12-06 13:08:29.112942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.627 [2024-12-06 13:08:29.112957] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.627 [2024-12-06 13:08:29.112977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.627 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:22.886 [2024-12-06 13:08:29.162630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.886 BaseBdev1 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.886 [ 00:14:22.886 { 00:14:22.886 "name": "BaseBdev1", 00:14:22.886 "aliases": [ 00:14:22.886 "ef4450e5-a960-4c66-8324-afe225212569" 00:14:22.886 ], 00:14:22.886 "product_name": "Malloc disk", 00:14:22.886 "block_size": 512, 
00:14:22.886 "num_blocks": 65536, 00:14:22.886 "uuid": "ef4450e5-a960-4c66-8324-afe225212569", 00:14:22.886 "assigned_rate_limits": { 00:14:22.886 "rw_ios_per_sec": 0, 00:14:22.886 "rw_mbytes_per_sec": 0, 00:14:22.886 "r_mbytes_per_sec": 0, 00:14:22.886 "w_mbytes_per_sec": 0 00:14:22.886 }, 00:14:22.886 "claimed": true, 00:14:22.886 "claim_type": "exclusive_write", 00:14:22.886 "zoned": false, 00:14:22.886 "supported_io_types": { 00:14:22.886 "read": true, 00:14:22.886 "write": true, 00:14:22.886 "unmap": true, 00:14:22.886 "flush": true, 00:14:22.886 "reset": true, 00:14:22.886 "nvme_admin": false, 00:14:22.886 "nvme_io": false, 00:14:22.886 "nvme_io_md": false, 00:14:22.886 "write_zeroes": true, 00:14:22.886 "zcopy": true, 00:14:22.886 "get_zone_info": false, 00:14:22.886 "zone_management": false, 00:14:22.886 "zone_append": false, 00:14:22.886 "compare": false, 00:14:22.886 "compare_and_write": false, 00:14:22.886 "abort": true, 00:14:22.886 "seek_hole": false, 00:14:22.886 "seek_data": false, 00:14:22.886 "copy": true, 00:14:22.886 "nvme_iov_md": false 00:14:22.886 }, 00:14:22.886 "memory_domains": [ 00:14:22.886 { 00:14:22.886 "dma_device_id": "system", 00:14:22.886 "dma_device_type": 1 00:14:22.886 }, 00:14:22.886 { 00:14:22.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.886 "dma_device_type": 2 00:14:22.886 } 00:14:22.886 ], 00:14:22.886 "driver_specific": {} 00:14:22.886 } 00:14:22.886 ] 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.886 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.886 "name": "Existed_Raid", 00:14:22.886 "uuid": "5408508e-9718-46b1-bc64-771188c50ca1", 00:14:22.886 "strip_size_kb": 0, 00:14:22.886 "state": "configuring", 00:14:22.886 "raid_level": "raid1", 00:14:22.886 "superblock": true, 00:14:22.886 "num_base_bdevs": 2, 00:14:22.886 "num_base_bdevs_discovered": 1, 00:14:22.886 "num_base_bdevs_operational": 2, 00:14:22.886 "base_bdevs_list": [ 00:14:22.886 { 00:14:22.886 "name": "BaseBdev1", 
00:14:22.886 "uuid": "ef4450e5-a960-4c66-8324-afe225212569", 00:14:22.886 "is_configured": true, 00:14:22.886 "data_offset": 2048, 00:14:22.887 "data_size": 63488 00:14:22.887 }, 00:14:22.887 { 00:14:22.887 "name": "BaseBdev2", 00:14:22.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.887 "is_configured": false, 00:14:22.887 "data_offset": 0, 00:14:22.887 "data_size": 0 00:14:22.887 } 00:14:22.887 ] 00:14:22.887 }' 00:14:22.887 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.887 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.453 [2024-12-06 13:08:29.690806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.453 [2024-12-06 13:08:29.690884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.453 [2024-12-06 13:08:29.702957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.453 [2024-12-06 13:08:29.705694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:14:23.453 [2024-12-06 13:08:29.705764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.453 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.453 "name": "Existed_Raid", 00:14:23.453 "uuid": "9aa4fd79-c08d-4f19-bb32-326903031381", 00:14:23.453 "strip_size_kb": 0, 00:14:23.453 "state": "configuring", 00:14:23.453 "raid_level": "raid1", 00:14:23.453 "superblock": true, 00:14:23.453 "num_base_bdevs": 2, 00:14:23.453 "num_base_bdevs_discovered": 1, 00:14:23.453 "num_base_bdevs_operational": 2, 00:14:23.453 "base_bdevs_list": [ 00:14:23.453 { 00:14:23.453 "name": "BaseBdev1", 00:14:23.453 "uuid": "ef4450e5-a960-4c66-8324-afe225212569", 00:14:23.453 "is_configured": true, 00:14:23.453 "data_offset": 2048, 00:14:23.453 "data_size": 63488 00:14:23.453 }, 00:14:23.453 { 00:14:23.453 "name": "BaseBdev2", 00:14:23.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.454 "is_configured": false, 00:14:23.454 "data_offset": 0, 00:14:23.454 "data_size": 0 00:14:23.454 } 00:14:23.454 ] 00:14:23.454 }' 00:14:23.454 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.454 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.712 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:23.712 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.712 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 [2024-12-06 13:08:30.273199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.970 [2024-12-06 13:08:30.273627] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:23.970 [2024-12-06 13:08:30.273651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:23.970 [2024-12-06 13:08:30.274039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:23.970 BaseBdev2 00:14:23.970 [2024-12-06 13:08:30.274284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:23.970 [2024-12-06 13:08:30.274309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:23.970 [2024-12-06 13:08:30.274512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 [ 00:14:23.970 { 00:14:23.970 "name": "BaseBdev2", 00:14:23.970 "aliases": [ 00:14:23.970 "8ec83fb9-d546-41ae-a10a-7bce69e54f59" 00:14:23.970 ], 00:14:23.970 "product_name": "Malloc disk", 00:14:23.970 "block_size": 512, 00:14:23.970 "num_blocks": 65536, 00:14:23.970 "uuid": "8ec83fb9-d546-41ae-a10a-7bce69e54f59", 00:14:23.970 "assigned_rate_limits": { 00:14:23.970 "rw_ios_per_sec": 0, 00:14:23.970 "rw_mbytes_per_sec": 0, 00:14:23.970 "r_mbytes_per_sec": 0, 00:14:23.970 "w_mbytes_per_sec": 0 00:14:23.970 }, 00:14:23.970 "claimed": true, 00:14:23.970 "claim_type": "exclusive_write", 00:14:23.970 "zoned": false, 00:14:23.970 "supported_io_types": { 00:14:23.970 "read": true, 00:14:23.970 "write": true, 00:14:23.970 "unmap": true, 00:14:23.970 "flush": true, 00:14:23.970 "reset": true, 00:14:23.970 "nvme_admin": false, 00:14:23.970 "nvme_io": false, 00:14:23.970 "nvme_io_md": false, 00:14:23.970 "write_zeroes": true, 00:14:23.970 "zcopy": true, 00:14:23.970 "get_zone_info": false, 00:14:23.970 "zone_management": false, 00:14:23.970 "zone_append": false, 00:14:23.970 "compare": false, 00:14:23.970 "compare_and_write": false, 00:14:23.970 "abort": true, 00:14:23.970 "seek_hole": false, 00:14:23.970 "seek_data": false, 00:14:23.970 "copy": true, 00:14:23.970 "nvme_iov_md": false 00:14:23.970 }, 00:14:23.970 "memory_domains": [ 00:14:23.970 { 00:14:23.970 "dma_device_id": "system", 00:14:23.970 "dma_device_type": 1 00:14:23.970 }, 00:14:23.970 { 00:14:23.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.970 "dma_device_type": 2 00:14:23.970 } 00:14:23.970 ], 00:14:23.970 "driver_specific": 
{} 00:14:23.970 } 00:14:23.970 ] 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.970 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.971 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.971 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:23.971 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.971 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.971 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.971 "name": "Existed_Raid", 00:14:23.971 "uuid": "9aa4fd79-c08d-4f19-bb32-326903031381", 00:14:23.971 "strip_size_kb": 0, 00:14:23.971 "state": "online", 00:14:23.971 "raid_level": "raid1", 00:14:23.971 "superblock": true, 00:14:23.971 "num_base_bdevs": 2, 00:14:23.971 "num_base_bdevs_discovered": 2, 00:14:23.971 "num_base_bdevs_operational": 2, 00:14:23.971 "base_bdevs_list": [ 00:14:23.971 { 00:14:23.971 "name": "BaseBdev1", 00:14:23.971 "uuid": "ef4450e5-a960-4c66-8324-afe225212569", 00:14:23.971 "is_configured": true, 00:14:23.971 "data_offset": 2048, 00:14:23.971 "data_size": 63488 00:14:23.971 }, 00:14:23.971 { 00:14:23.971 "name": "BaseBdev2", 00:14:23.971 "uuid": "8ec83fb9-d546-41ae-a10a-7bce69e54f59", 00:14:23.971 "is_configured": true, 00:14:23.971 "data_offset": 2048, 00:14:23.971 "data_size": 63488 00:14:23.971 } 00:14:23.971 ] 00:14:23.971 }' 00:14:23.971 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.971 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.538 [2024-12-06 13:08:30.821756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:24.538 "name": "Existed_Raid", 00:14:24.538 "aliases": [ 00:14:24.538 "9aa4fd79-c08d-4f19-bb32-326903031381" 00:14:24.538 ], 00:14:24.538 "product_name": "Raid Volume", 00:14:24.538 "block_size": 512, 00:14:24.538 "num_blocks": 63488, 00:14:24.538 "uuid": "9aa4fd79-c08d-4f19-bb32-326903031381", 00:14:24.538 "assigned_rate_limits": { 00:14:24.538 "rw_ios_per_sec": 0, 00:14:24.538 "rw_mbytes_per_sec": 0, 00:14:24.538 "r_mbytes_per_sec": 0, 00:14:24.538 "w_mbytes_per_sec": 0 00:14:24.538 }, 00:14:24.538 "claimed": false, 00:14:24.538 "zoned": false, 00:14:24.538 "supported_io_types": { 00:14:24.538 "read": true, 00:14:24.538 "write": true, 00:14:24.538 "unmap": false, 00:14:24.538 "flush": false, 00:14:24.538 "reset": true, 00:14:24.538 "nvme_admin": false, 00:14:24.538 "nvme_io": false, 00:14:24.538 "nvme_io_md": false, 00:14:24.538 "write_zeroes": true, 00:14:24.538 "zcopy": false, 00:14:24.538 "get_zone_info": false, 00:14:24.538 "zone_management": false, 00:14:24.538 "zone_append": false, 00:14:24.538 "compare": false, 00:14:24.538 "compare_and_write": false, 
00:14:24.538 "abort": false, 00:14:24.538 "seek_hole": false, 00:14:24.538 "seek_data": false, 00:14:24.538 "copy": false, 00:14:24.538 "nvme_iov_md": false 00:14:24.538 }, 00:14:24.538 "memory_domains": [ 00:14:24.538 { 00:14:24.538 "dma_device_id": "system", 00:14:24.538 "dma_device_type": 1 00:14:24.538 }, 00:14:24.538 { 00:14:24.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.538 "dma_device_type": 2 00:14:24.538 }, 00:14:24.538 { 00:14:24.538 "dma_device_id": "system", 00:14:24.538 "dma_device_type": 1 00:14:24.538 }, 00:14:24.538 { 00:14:24.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.538 "dma_device_type": 2 00:14:24.538 } 00:14:24.538 ], 00:14:24.538 "driver_specific": { 00:14:24.538 "raid": { 00:14:24.538 "uuid": "9aa4fd79-c08d-4f19-bb32-326903031381", 00:14:24.538 "strip_size_kb": 0, 00:14:24.538 "state": "online", 00:14:24.538 "raid_level": "raid1", 00:14:24.538 "superblock": true, 00:14:24.538 "num_base_bdevs": 2, 00:14:24.538 "num_base_bdevs_discovered": 2, 00:14:24.538 "num_base_bdevs_operational": 2, 00:14:24.538 "base_bdevs_list": [ 00:14:24.538 { 00:14:24.538 "name": "BaseBdev1", 00:14:24.538 "uuid": "ef4450e5-a960-4c66-8324-afe225212569", 00:14:24.538 "is_configured": true, 00:14:24.538 "data_offset": 2048, 00:14:24.538 "data_size": 63488 00:14:24.538 }, 00:14:24.538 { 00:14:24.538 "name": "BaseBdev2", 00:14:24.538 "uuid": "8ec83fb9-d546-41ae-a10a-7bce69e54f59", 00:14:24.538 "is_configured": true, 00:14:24.538 "data_offset": 2048, 00:14:24.538 "data_size": 63488 00:14:24.538 } 00:14:24.538 ] 00:14:24.538 } 00:14:24.538 } 00:14:24.538 }' 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:24.538 BaseBdev2' 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.538 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.538 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.538 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.538 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.538 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:24.538 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.538 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.538 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.538 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.796 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:24.796 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.796 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:24.796 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.796 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.796 [2024-12-06 13:08:31.073483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.796 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.796 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:24.796 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.797 13:08:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.797 "name": "Existed_Raid", 00:14:24.797 "uuid": "9aa4fd79-c08d-4f19-bb32-326903031381", 00:14:24.797 "strip_size_kb": 0, 00:14:24.797 "state": "online", 00:14:24.797 "raid_level": "raid1", 00:14:24.797 "superblock": true, 00:14:24.797 "num_base_bdevs": 2, 00:14:24.797 "num_base_bdevs_discovered": 1, 00:14:24.797 "num_base_bdevs_operational": 1, 00:14:24.797 "base_bdevs_list": [ 00:14:24.797 { 00:14:24.797 "name": null, 00:14:24.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.797 "is_configured": false, 00:14:24.797 "data_offset": 0, 00:14:24.797 "data_size": 63488 00:14:24.797 }, 00:14:24.797 { 00:14:24.797 "name": "BaseBdev2", 00:14:24.797 "uuid": "8ec83fb9-d546-41ae-a10a-7bce69e54f59", 00:14:24.797 "is_configured": true, 00:14:24.797 "data_offset": 2048, 00:14:24.797 "data_size": 63488 00:14:24.797 } 00:14:24.797 ] 00:14:24.797 }' 00:14:24.797 
13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.797 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.362 [2024-12-06 13:08:31.713642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.362 [2024-12-06 13:08:31.713815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.362 [2024-12-06 13:08:31.809921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.362 [2024-12-06 13:08:31.810023] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.362 [2024-12-06 13:08:31.810057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.362 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63171 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63171 ']' 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63171 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.363 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63171 00:14:25.621 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.621 killing process with pid 63171 00:14:25.621 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.621 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63171' 00:14:25.621 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63171 00:14:25.621 [2024-12-06 13:08:31.897143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.621 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63171 00:14:25.621 [2024-12-06 13:08:31.913070] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.565 13:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:26.565 00:14:26.565 real 0m5.604s 00:14:26.565 user 0m8.309s 00:14:26.565 sys 0m0.869s 00:14:26.565 13:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.565 13:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.565 ************************************ 00:14:26.565 END TEST raid_state_function_test_sb 00:14:26.565 ************************************ 00:14:26.824 13:08:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:14:26.824 13:08:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:26.824 13:08:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.824 13:08:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.824 
************************************ 00:14:26.824 START TEST raid_superblock_test 00:14:26.824 ************************************ 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63423 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63423 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63423 ']' 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.824 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.824 [2024-12-06 13:08:33.236628] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:26.824 [2024-12-06 13:08:33.236805] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63423 ] 00:14:27.082 [2024-12-06 13:08:33.419691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.082 [2024-12-06 13:08:33.589397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.340 [2024-12-06 13:08:33.817248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.340 [2024-12-06 13:08:33.817359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:27.908 
13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.908 malloc1
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.908 [2024-12-06 13:08:34.265568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:27.908 [2024-12-06 13:08:34.265709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:27.908 [2024-12-06 13:08:34.265757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:27.908 [2024-12-06 13:08:34.265773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:27.908 [2024-12-06 13:08:34.268946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:27.908 [2024-12-06 13:08:34.269006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:27.908 pt1
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.908 malloc2
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.908 [2024-12-06 13:08:34.321083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:27.908 [2024-12-06 13:08:34.321198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:27.908 [2024-12-06 13:08:34.321233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:27.908 [2024-12-06 13:08:34.321249] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:27.908 [2024-12-06 13:08:34.324095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:27.908 [2024-12-06 13:08:34.324140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:27.908 pt2
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.908 [2024-12-06 13:08:34.329135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:27.908 [2024-12-06 13:08:34.331783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:27.908 [2024-12-06 13:08:34.331987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:14:27.908 [2024-12-06 13:08:34.332010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:27.908 [2024-12-06 13:08:34.332296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:14:27.908 [2024-12-06 13:08:34.332506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:14:27.908 [2024-12-06 13:08:34.332531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:14:27.908 [2024-12-06 13:08:34.332692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.908 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:27.909 "name": "raid_bdev1",
00:14:27.909 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1",
00:14:27.909 "strip_size_kb": 0,
00:14:27.909 "state": "online",
00:14:27.909 "raid_level": "raid1",
00:14:27.909 "superblock": true,
00:14:27.909 "num_base_bdevs": 2,
00:14:27.909 "num_base_bdevs_discovered": 2,
00:14:27.909 "num_base_bdevs_operational": 2,
00:14:27.909 "base_bdevs_list": [
00:14:27.909 {
00:14:27.909 "name": "pt1",
00:14:27.909 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:27.909 "is_configured": true,
00:14:27.909 "data_offset": 2048,
00:14:27.909 "data_size": 63488
00:14:27.909 },
00:14:27.909 {
00:14:27.909 "name": "pt2",
00:14:27.909 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:27.909 "is_configured": true,
00:14:27.909 "data_offset": 2048,
00:14:27.909 "data_size": 63488
00:14:27.909 }
00:14:27.909 ]
00:14:27.909 }'
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:27.909 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:28.475 [2024-12-06 13:08:34.861749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.475 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:28.475 "name": "raid_bdev1",
00:14:28.475 "aliases": [
00:14:28.475 "e0860185-5932-43f1-acec-bd82cfb1c1e1"
00:14:28.475 ],
00:14:28.475 "product_name": "Raid Volume",
00:14:28.475 "block_size": 512,
00:14:28.475 "num_blocks": 63488,
00:14:28.475 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1",
00:14:28.475 "assigned_rate_limits": {
00:14:28.475 "rw_ios_per_sec": 0,
00:14:28.475 "rw_mbytes_per_sec": 0,
00:14:28.475 "r_mbytes_per_sec": 0,
00:14:28.475 "w_mbytes_per_sec": 0
00:14:28.475 },
00:14:28.475 "claimed": false,
00:14:28.475 "zoned": false,
00:14:28.475 "supported_io_types": {
00:14:28.475 "read": true,
00:14:28.476 "write": true,
00:14:28.476 "unmap": false,
00:14:28.476 "flush": false,
00:14:28.476 "reset": true,
00:14:28.476 "nvme_admin": false,
00:14:28.476 "nvme_io": false,
00:14:28.476 "nvme_io_md": false,
00:14:28.476 "write_zeroes": true,
00:14:28.476 "zcopy": false,
00:14:28.476 "get_zone_info": false,
00:14:28.476 "zone_management": false,
00:14:28.476 "zone_append": false,
00:14:28.476 "compare": false,
00:14:28.476 "compare_and_write": false,
00:14:28.476 "abort": false,
00:14:28.476 "seek_hole": false,
00:14:28.476 "seek_data": false,
00:14:28.476 "copy": false,
00:14:28.476 "nvme_iov_md": false
00:14:28.476 },
00:14:28.476 "memory_domains": [
00:14:28.476 {
00:14:28.476 "dma_device_id": "system",
00:14:28.476 "dma_device_type": 1
00:14:28.476 },
00:14:28.476 {
00:14:28.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:28.476 "dma_device_type": 2
00:14:28.476 },
00:14:28.476 {
00:14:28.476 "dma_device_id": "system",
00:14:28.476 "dma_device_type": 1
00:14:28.476 },
00:14:28.476 {
00:14:28.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:28.476 "dma_device_type": 2
00:14:28.476 }
00:14:28.476 ],
00:14:28.476 "driver_specific": {
00:14:28.476 "raid": {
00:14:28.476 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1",
00:14:28.476 "strip_size_kb": 0,
00:14:28.476 "state": "online",
00:14:28.476 "raid_level": "raid1",
00:14:28.476 "superblock": true,
00:14:28.476 "num_base_bdevs": 2,
00:14:28.476 "num_base_bdevs_discovered": 2,
00:14:28.476 "num_base_bdevs_operational": 2,
00:14:28.476 "base_bdevs_list": [
00:14:28.476 {
00:14:28.476 "name": "pt1",
00:14:28.476 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:28.476 "is_configured": true,
00:14:28.476 "data_offset": 2048,
00:14:28.476 "data_size": 63488
00:14:28.476 },
00:14:28.476 {
00:14:28.476 "name": "pt2",
00:14:28.476 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:28.476 "is_configured": true,
00:14:28.476 "data_offset": 2048,
00:14:28.476 "data_size": 63488
00:14:28.476 }
00:14:28.476 ]
00:14:28.476 }
00:14:28.476 }
00:14:28.476 }'
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:28.476 pt2'
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.476 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.737 [2024-12-06 13:08:35.109693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e0860185-5932-43f1-acec-bd82cfb1c1e1
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e0860185-5932-43f1-acec-bd82cfb1c1e1 ']'
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.737 [2024-12-06 13:08:35.157328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:28.737 [2024-12-06 13:08:35.157355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:28.737 [2024-12-06 13:08:35.157509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:28.737 [2024-12-06 13:08:35.157594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:28.737 [2024-12-06 13:08:35.157644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:28.737 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.995 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.995 [2024-12-06 13:08:35.293400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:28.995 [2024-12-06 13:08:35.296149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:28.995 [2024-12-06 13:08:35.296251] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:14:28.995 [2024-12-06 13:08:35.296353] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:14:28.995 [2024-12-06 13:08:35.296377] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:28.995 [2024-12-06 13:08:35.296391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:14:28.995 request:
00:14:28.995 {
00:14:28.995 "name": "raid_bdev1",
00:14:28.995 "raid_level": "raid1",
00:14:28.995 "base_bdevs": [
00:14:28.995 "malloc1",
00:14:28.995 "malloc2"
00:14:28.995 ],
00:14:28.995 "superblock": false,
00:14:28.995 "method": "bdev_raid_create",
00:14:28.995 "req_id": 1
00:14:28.995 }
00:14:28.995 Got JSON-RPC error response
00:14:28.996 response:
00:14:28.996 {
00:14:28.996 "code": -17,
00:14:28.996 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:28.996 }
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.996 [2024-12-06 13:08:35.357477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:28.996 [2024-12-06 13:08:35.357582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:28.996 [2024-12-06 13:08:35.357607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:14:28.996 [2024-12-06 13:08:35.357623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:28.996 [2024-12-06 13:08:35.361029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:28.996 [2024-12-06 13:08:35.361108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:28.996 [2024-12-06 13:08:35.361215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:14:28.996 [2024-12-06 13:08:35.361284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:28.996 pt1
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:28.996 "name": "raid_bdev1",
00:14:28.996 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1",
00:14:28.996 "strip_size_kb": 0,
00:14:28.996 "state": "configuring",
00:14:28.996 "raid_level": "raid1",
00:14:28.996 "superblock": true,
00:14:28.996 "num_base_bdevs": 2,
00:14:28.996 "num_base_bdevs_discovered": 1,
00:14:28.996 "num_base_bdevs_operational": 2,
00:14:28.996 "base_bdevs_list": [
00:14:28.996 {
00:14:28.996 "name": "pt1",
00:14:28.996 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:28.996 "is_configured": true,
00:14:28.996 "data_offset": 2048,
00:14:28.996 "data_size": 63488
00:14:28.996 },
00:14:28.996 {
00:14:28.996 "name": null,
00:14:28.996 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:28.996 "is_configured": false,
00:14:28.996 "data_offset": 2048,
00:14:28.996 "data_size": 63488
00:14:28.996 }
00:14:28.996 ]
00:14:28.996 }'
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:28.996 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:29.562 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:14:29.562 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:14:29.562 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:29.562 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:29.562 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.562 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:29.562 [2024-12-06 13:08:35.869814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:29.562 [2024-12-06 13:08:35.870200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:29.562 [2024-12-06 13:08:35.870281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:14:29.562 [2024-12-06 13:08:35.870539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:29.563 [2024-12-06 13:08:35.871247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:29.563 [2024-12-06 13:08:35.871291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:29.563 [2024-12-06 13:08:35.871408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:14:29.563 [2024-12-06 13:08:35.871466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:29.563 [2024-12-06 13:08:35.871634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:14:29.563 [2024-12-06 13:08:35.871655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:29.563 [2024-12-06 13:08:35.871981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:29.563 [2024-12-06 13:08:35.872185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:14:29.563 [2024-12-06 13:08:35.872199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:14:29.563 [2024-12-06 13:08:35.872374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:29.563 pt2
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:29.563 "name": "raid_bdev1",
00:14:29.563 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1",
00:14:29.563 "strip_size_kb": 0,
00:14:29.563 "state": "online",
00:14:29.563 "raid_level": "raid1",
00:14:29.563 "superblock": true,
00:14:29.563 "num_base_bdevs": 2,
00:14:29.563 "num_base_bdevs_discovered": 2,
00:14:29.563 "num_base_bdevs_operational": 2,
00:14:29.563 "base_bdevs_list": [
00:14:29.563 {
00:14:29.563 "name": "pt1",
00:14:29.563 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:29.563 "is_configured": true,
00:14:29.563 "data_offset": 2048,
00:14:29.563 "data_size": 63488
00:14:29.563 },
00:14:29.563 {
00:14:29.563 "name": "pt2",
00:14:29.563 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:29.563 "is_configured": true,
00:14:29.563 "data_offset": 2048,
00:14:29.563 "data_size": 63488
00:14:29.563 }
00:14:29.563 ]
00:14:29.563 }'
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:29.563 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:30.131 [2024-12-06 13:08:36.430265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:30.131 "name": "raid_bdev1",
00:14:30.131 "aliases": [
00:14:30.131 "e0860185-5932-43f1-acec-bd82cfb1c1e1"
00:14:30.131 ],
00:14:30.131 "product_name": "Raid Volume",
00:14:30.131 "block_size": 512,
00:14:30.131 "num_blocks": 63488,
00:14:30.131 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1",
00:14:30.131 "assigned_rate_limits": {
00:14:30.131 "rw_ios_per_sec": 0,
00:14:30.131 "rw_mbytes_per_sec": 0,
00:14:30.131 "r_mbytes_per_sec": 0,
00:14:30.131 "w_mbytes_per_sec": 0
00:14:30.131 },
00:14:30.131 "claimed": false,
00:14:30.131 "zoned": false,
00:14:30.131 "supported_io_types": {
00:14:30.131 "read": true,
00:14:30.131 "write": true,
00:14:30.131 "unmap": false,
00:14:30.131 "flush": false,
00:14:30.131 "reset": true,
00:14:30.131 "nvme_admin": false,
00:14:30.131 "nvme_io": false,
00:14:30.131 "nvme_io_md": false,
00:14:30.131 "write_zeroes": true,
00:14:30.131 "zcopy": false,
00:14:30.131 "get_zone_info": false,
00:14:30.131 "zone_management": false,
00:14:30.131 "zone_append": false,
00:14:30.131 "compare": false,
00:14:30.131 "compare_and_write": false,
00:14:30.131 "abort": false,
00:14:30.131 "seek_hole": false,
00:14:30.131 "seek_data": false,
00:14:30.131 "copy": false,
00:14:30.131 "nvme_iov_md": false
00:14:30.131 },
00:14:30.131 "memory_domains": [
00:14:30.131 {
00:14:30.131 "dma_device_id": "system",
00:14:30.131 "dma_device_type": 1
00:14:30.131 },
00:14:30.131 {
00:14:30.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:30.131 "dma_device_type": 2
00:14:30.131 },
00:14:30.131 {
00:14:30.131 "dma_device_id": "system",
00:14:30.131 "dma_device_type": 1
00:14:30.131 },
00:14:30.131 {
00:14:30.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:30.131 "dma_device_type": 2
00:14:30.131 }
00:14:30.131 ],
00:14:30.131 "driver_specific": {
00:14:30.131 "raid": {
00:14:30.131 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1",
00:14:30.131 "strip_size_kb": 0,
00:14:30.131 "state": "online",
00:14:30.131 "raid_level": "raid1",
00:14:30.131 "superblock": true,
00:14:30.131 "num_base_bdevs": 2,
00:14:30.131 "num_base_bdevs_discovered": 2,
00:14:30.131 "num_base_bdevs_operational": 2,
00:14:30.131 "base_bdevs_list": [
00:14:30.131 {
00:14:30.131 "name": "pt1",
00:14:30.131 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:30.131 "is_configured": true,
00:14:30.131 "data_offset": 2048,
00:14:30.131 "data_size": 63488
00:14:30.131 },
00:14:30.131 {
00:14:30.131 "name": "pt2",
00:14:30.131 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:30.131 "is_configured": true,
00:14:30.131 "data_offset": 2048,
00:14:30.131 "data_size": 63488
00:14:30.131 }
00:14:30.131 ]
00:14:30.131 }
00:14:30.131 }
00:14:30.131 }'
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:14:30.131 pt2'
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:30.131 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:14:30.390 [2024-12-06 13:08:36.674282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e0860185-5932-43f1-acec-bd82cfb1c1e1 '!=' e0860185-5932-43f1-acec-bd82cfb1c1e1 ']'
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:30.390 [2024-12-06 13:08:36.726063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107
-- # local num_base_bdevs_operational=1 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.390 "name": "raid_bdev1", 00:14:30.390 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1", 00:14:30.390 "strip_size_kb": 0, 00:14:30.390 "state": "online", 00:14:30.390 "raid_level": "raid1", 00:14:30.390 "superblock": true, 00:14:30.390 "num_base_bdevs": 2, 00:14:30.390 "num_base_bdevs_discovered": 1, 00:14:30.390 "num_base_bdevs_operational": 1, 00:14:30.390 "base_bdevs_list": [ 00:14:30.390 { 00:14:30.390 "name": null, 00:14:30.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.390 "is_configured": false, 00:14:30.390 "data_offset": 0, 00:14:30.390 "data_size": 63488 00:14:30.390 }, 00:14:30.390 { 00:14:30.390 "name": "pt2", 00:14:30.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.390 "is_configured": true, 00:14:30.390 "data_offset": 2048, 00:14:30.390 "data_size": 63488 00:14:30.390 } 00:14:30.390 ] 00:14:30.390 }' 00:14:30.390 13:08:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.390 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 [2024-12-06 13:08:37.258292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.959 [2024-12-06 13:08:37.258337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.959 [2024-12-06 13:08:37.258451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.959 [2024-12-06 13:08:37.258582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.959 [2024-12-06 13:08:37.258604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:30.959 
13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 [2024-12-06 13:08:37.330291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.959 [2024-12-06 13:08:37.330549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.959 [2024-12-06 13:08:37.330698] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:30.959 [2024-12-06 13:08:37.330835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.959 [2024-12-06 
13:08:37.334274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.959 [2024-12-06 13:08:37.334468] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.959 [2024-12-06 13:08:37.334720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:30.959 [2024-12-06 13:08:37.334888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.959 pt2 00:14:30.959 [2024-12-06 13:08:37.335177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:30.959 [2024-12-06 13:08:37.335208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.959 [2024-12-06 13:08:37.335598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:30.959 [2024-12-06 13:08:37.335809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.959 [2024-12-06 13:08:37.335826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:30.959 [2024-12-06 13:08:37.336023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.959 "name": "raid_bdev1", 00:14:30.959 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1", 00:14:30.959 "strip_size_kb": 0, 00:14:30.959 "state": "online", 00:14:30.959 "raid_level": "raid1", 00:14:30.959 "superblock": true, 00:14:30.959 "num_base_bdevs": 2, 00:14:30.959 "num_base_bdevs_discovered": 1, 00:14:30.959 "num_base_bdevs_operational": 1, 00:14:30.959 "base_bdevs_list": [ 00:14:30.959 { 00:14:30.959 "name": null, 00:14:30.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.959 "is_configured": false, 00:14:30.959 "data_offset": 2048, 00:14:30.959 "data_size": 63488 00:14:30.959 }, 00:14:30.959 { 00:14:30.959 "name": "pt2", 00:14:30.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.959 "is_configured": true, 00:14:30.959 "data_offset": 2048, 00:14:30.959 "data_size": 63488 00:14:30.959 } 00:14:30.959 ] 00:14:30.959 }' 
00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.959 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.527 [2024-12-06 13:08:37.871017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.527 [2024-12-06 13:08:37.871054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.527 [2024-12-06 13:08:37.871171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.527 [2024-12-06 13:08:37.871242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.527 [2024-12-06 13:08:37.871256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.527 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.528 [2024-12-06 13:08:37.943148] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.528 [2024-12-06 13:08:37.943414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.528 [2024-12-06 13:08:37.943525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:31.528 [2024-12-06 13:08:37.943751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.528 [2024-12-06 13:08:37.947262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.528 pt1 00:14:31.528 [2024-12-06 13:08:37.947478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.528 [2024-12-06 13:08:37.947622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:31.528 [2024-12-06 13:08:37.947686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:31.528 [2024-12-06 13:08:37.947961] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:31.528 [2024-12-06 13:08:37.948004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.528 [2024-12-06 13:08:37.948027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:31.528 [2024-12-06 13:08:37.948095] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.528 [2024-12-06 13:08:37.948203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:31.528 [2024-12-06 13:08:37.948219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:31.528 [2024-12-06 13:08:37.948610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:31.528 [2024-12-06 13:08:37.948804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:31.528 [2024-12-06 13:08:37.948826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.528 [2024-12-06 13:08:37.949016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.528 13:08:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.528 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.528 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.528 "name": "raid_bdev1", 00:14:31.528 "uuid": "e0860185-5932-43f1-acec-bd82cfb1c1e1", 00:14:31.528 "strip_size_kb": 0, 00:14:31.528 "state": "online", 00:14:31.528 "raid_level": "raid1", 00:14:31.528 "superblock": true, 00:14:31.528 "num_base_bdevs": 2, 00:14:31.528 "num_base_bdevs_discovered": 1, 00:14:31.528 "num_base_bdevs_operational": 1, 00:14:31.528 "base_bdevs_list": [ 00:14:31.528 { 00:14:31.528 "name": null, 00:14:31.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.528 "is_configured": false, 00:14:31.528 "data_offset": 2048, 00:14:31.528 "data_size": 63488 00:14:31.528 }, 00:14:31.528 { 00:14:31.528 "name": "pt2", 00:14:31.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.528 "is_configured": true, 00:14:31.528 "data_offset": 2048, 00:14:31.528 "data_size": 63488 00:14:31.528 } 00:14:31.528 ] 00:14:31.528 }' 00:14:31.528 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.528 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.094 [2024-12-06 13:08:38.528159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e0860185-5932-43f1-acec-bd82cfb1c1e1 '!=' e0860185-5932-43f1-acec-bd82cfb1c1e1 ']' 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63423 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63423 ']' 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63423 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63423 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.094 killing process with pid 63423 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63423' 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63423 00:14:32.094 [2024-12-06 13:08:38.612611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.094 [2024-12-06 13:08:38.612733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.094 13:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63423 00:14:32.094 [2024-12-06 13:08:38.612801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.094 [2024-12-06 13:08:38.612824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:32.352 [2024-12-06 13:08:38.796393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.786 13:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:33.786 00:14:33.786 real 0m6.765s 00:14:33.786 user 0m10.628s 00:14:33.786 sys 0m1.019s 00:14:33.786 13:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.786 ************************************ 00:14:33.786 END TEST raid_superblock_test 00:14:33.786 ************************************ 00:14:33.786 13:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.786 13:08:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:14:33.786 13:08:39 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:33.786 13:08:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.786 13:08:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.786 ************************************ 00:14:33.786 START TEST raid_read_error_test 00:14:33.786 ************************************ 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # local strip_size 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Sx74OK242V 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63764 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63764 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63764 ']' 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.786 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.786 [2024-12-06 13:08:40.083309] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:14:33.786 [2024-12-06 13:08:40.083524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63764 ] 00:14:33.786 [2024-12-06 13:08:40.273091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.045 [2024-12-06 13:08:40.418558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.303 [2024-12-06 13:08:40.641180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.303 [2024-12-06 13:08:40.641284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.869 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.870 BaseBdev1_malloc 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.870 true 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.870 [2024-12-06 13:08:41.170586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:34.870 [2024-12-06 13:08:41.170676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.870 [2024-12-06 13:08:41.170707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:34.870 [2024-12-06 13:08:41.170727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.870 [2024-12-06 13:08:41.173739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.870 [2024-12-06 13:08:41.173821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.870 BaseBdev1 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:34.870 BaseBdev2_malloc 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.870 true 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.870 [2024-12-06 13:08:41.234568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:34.870 [2024-12-06 13:08:41.234659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.870 [2024-12-06 13:08:41.234685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:34.870 [2024-12-06 13:08:41.234702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.870 [2024-12-06 13:08:41.237661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.870 [2024-12-06 13:08:41.237724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.870 BaseBdev2 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:34.870 13:08:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.870 [2024-12-06 13:08:41.242729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.870 [2024-12-06 13:08:41.245272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.870 [2024-12-06 13:08:41.245608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:34.870 [2024-12-06 13:08:41.245639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:34.870 [2024-12-06 13:08:41.245957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:34.870 [2024-12-06 13:08:41.246263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:34.870 [2024-12-06 13:08:41.246291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:34.870 [2024-12-06 13:08:41.246514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.870 "name": "raid_bdev1", 00:14:34.870 "uuid": "f6f1f3a8-7f9c-4095-8491-d7c0b844833e", 00:14:34.870 "strip_size_kb": 0, 00:14:34.870 "state": "online", 00:14:34.870 "raid_level": "raid1", 00:14:34.870 "superblock": true, 00:14:34.870 "num_base_bdevs": 2, 00:14:34.870 "num_base_bdevs_discovered": 2, 00:14:34.870 "num_base_bdevs_operational": 2, 00:14:34.870 "base_bdevs_list": [ 00:14:34.870 { 00:14:34.870 "name": "BaseBdev1", 00:14:34.870 "uuid": "e8c11258-0c5b-54ea-81ea-7da97d68f913", 00:14:34.870 "is_configured": true, 00:14:34.870 "data_offset": 2048, 00:14:34.870 "data_size": 63488 00:14:34.870 }, 00:14:34.870 { 00:14:34.870 "name": "BaseBdev2", 00:14:34.870 "uuid": "9492dd5e-edfa-5cd4-9fcf-bc8f018b5c75", 00:14:34.870 "is_configured": true, 00:14:34.870 "data_offset": 2048, 00:14:34.870 "data_size": 63488 00:14:34.870 } 00:14:34.870 ] 00:14:34.870 }' 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.870 13:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.437 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:35.437 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:35.437 [2024-12-06 13:08:41.920289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.375 13:08:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.375 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.376 "name": "raid_bdev1", 00:14:36.376 "uuid": "f6f1f3a8-7f9c-4095-8491-d7c0b844833e", 00:14:36.376 "strip_size_kb": 0, 00:14:36.376 "state": "online", 00:14:36.376 "raid_level": "raid1", 00:14:36.376 "superblock": true, 00:14:36.376 "num_base_bdevs": 2, 00:14:36.376 "num_base_bdevs_discovered": 2, 00:14:36.376 "num_base_bdevs_operational": 2, 00:14:36.376 "base_bdevs_list": [ 00:14:36.376 { 00:14:36.376 "name": "BaseBdev1", 00:14:36.376 "uuid": "e8c11258-0c5b-54ea-81ea-7da97d68f913", 00:14:36.376 "is_configured": true, 00:14:36.376 "data_offset": 2048, 00:14:36.376 "data_size": 63488 00:14:36.376 }, 00:14:36.376 { 00:14:36.376 "name": "BaseBdev2", 00:14:36.376 "uuid": "9492dd5e-edfa-5cd4-9fcf-bc8f018b5c75", 00:14:36.376 "is_configured": true, 00:14:36.376 "data_offset": 2048, 00:14:36.376 "data_size": 63488 
00:14:36.376 } 00:14:36.376 ] 00:14:36.376 }' 00:14:36.376 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.376 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.940 [2024-12-06 13:08:43.337357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.940 [2024-12-06 13:08:43.337423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.940 [2024-12-06 13:08:43.340923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.940 [2024-12-06 13:08:43.341003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.940 [2024-12-06 13:08:43.341144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.940 [2024-12-06 13:08:43.341164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:36.940 { 00:14:36.940 "results": [ 00:14:36.940 { 00:14:36.940 "job": "raid_bdev1", 00:14:36.940 "core_mask": "0x1", 00:14:36.940 "workload": "randrw", 00:14:36.940 "percentage": 50, 00:14:36.940 "status": "finished", 00:14:36.940 "queue_depth": 1, 00:14:36.940 "io_size": 131072, 00:14:36.940 "runtime": 1.414287, 00:14:36.940 "iops": 10452.616760247389, 00:14:36.940 "mibps": 1306.5770950309236, 00:14:36.940 "io_failed": 0, 00:14:36.940 "io_timeout": 0, 00:14:36.940 "avg_latency_us": 91.31850270273594, 00:14:36.940 "min_latency_us": 40.96, 00:14:36.940 "max_latency_us": 1936.290909090909 00:14:36.940 } 00:14:36.940 ], 00:14:36.940 
"core_count": 1 00:14:36.940 } 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63764 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63764 ']' 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63764 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63764 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.940 killing process with pid 63764 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63764' 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63764 00:14:36.940 [2024-12-06 13:08:43.376494] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.940 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63764 00:14:37.198 [2024-12-06 13:08:43.508292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Sx74OK242V 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:38.194 00:14:38.194 real 0m4.732s 00:14:38.194 user 0m5.872s 00:14:38.194 sys 0m0.680s 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.194 ************************************ 00:14:38.194 END TEST raid_read_error_test 00:14:38.194 ************************************ 00:14:38.194 13:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.452 13:08:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:14:38.452 13:08:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:38.452 13:08:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.452 13:08:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.452 ************************************ 00:14:38.452 START TEST raid_write_error_test 00:14:38.452 ************************************ 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- 
# (( i <= num_base_bdevs )) 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.czE1NKwiFc 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63910 00:14:38.452 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:38.452 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63910 00:14:38.453 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:38.453 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63910 ']' 00:14:38.453 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.453 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.453 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.453 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.453 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.453 [2024-12-06 13:08:44.875096] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:38.453 [2024-12-06 13:08:44.875316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63910 ] 00:14:38.711 [2024-12-06 13:08:45.069647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.711 [2024-12-06 13:08:45.217810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.969 [2024-12-06 13:08:45.442061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.969 [2024-12-06 13:08:45.442135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.536 BaseBdev1_malloc 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.536 true 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.536 [2024-12-06 13:08:45.900203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:39.536 [2024-12-06 13:08:45.900570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.536 [2024-12-06 13:08:45.900613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:39.536 [2024-12-06 13:08:45.900645] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.536 [2024-12-06 13:08:45.903819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.536 [2024-12-06 13:08:45.903897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:39.536 BaseBdev1 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.536 BaseBdev2_malloc 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.536 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:39.536 13:08:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.537 true 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.537 [2024-12-06 13:08:45.965060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:39.537 [2024-12-06 13:08:45.965188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.537 [2024-12-06 13:08:45.965216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:39.537 [2024-12-06 13:08:45.965232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.537 [2024-12-06 13:08:45.968202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.537 [2024-12-06 13:08:45.968251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.537 BaseBdev2 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.537 [2024-12-06 13:08:45.973211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:14:39.537 [2024-12-06 13:08:45.975900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.537 [2024-12-06 13:08:45.976158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:39.537 [2024-12-06 13:08:45.976180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.537 [2024-12-06 13:08:45.976504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:39.537 [2024-12-06 13:08:45.976750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:39.537 [2024-12-06 13:08:45.976766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:39.537 [2024-12-06 13:08:45.977077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.537 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.537 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.537 "name": "raid_bdev1", 00:14:39.537 "uuid": "b52595cf-5aa1-47e3-aa2b-ea4570b17e7b", 00:14:39.537 "strip_size_kb": 0, 00:14:39.537 "state": "online", 00:14:39.537 "raid_level": "raid1", 00:14:39.537 "superblock": true, 00:14:39.537 "num_base_bdevs": 2, 00:14:39.537 "num_base_bdevs_discovered": 2, 00:14:39.537 "num_base_bdevs_operational": 2, 00:14:39.537 "base_bdevs_list": [ 00:14:39.537 { 00:14:39.537 "name": "BaseBdev1", 00:14:39.537 "uuid": "3efa6569-ee50-5e62-a1a5-ed69e7a11574", 00:14:39.537 "is_configured": true, 00:14:39.537 "data_offset": 2048, 00:14:39.537 "data_size": 63488 00:14:39.537 }, 00:14:39.537 { 00:14:39.537 "name": "BaseBdev2", 00:14:39.537 "uuid": "77b3e77a-d63e-5405-b057-bff916818067", 00:14:39.537 "is_configured": true, 00:14:39.537 "data_offset": 2048, 00:14:39.537 "data_size": 63488 00:14:39.537 } 00:14:39.537 ] 00:14:39.537 }' 00:14:39.537 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.537 13:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.103 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:40.103 13:08:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:40.361 [2024-12-06 13:08:46.663125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.295 [2024-12-06 13:08:47.537712] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:14:41.295 [2024-12-06 13:08:47.537975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:41.295 [2024-12-06 13:08:47.538251] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:41.295 "name": "raid_bdev1",
00:14:41.295 "uuid": "b52595cf-5aa1-47e3-aa2b-ea4570b17e7b",
00:14:41.295 "strip_size_kb": 0,
00:14:41.295 "state": "online",
00:14:41.295 "raid_level": "raid1",
00:14:41.295 "superblock": true,
00:14:41.295 "num_base_bdevs": 2,
00:14:41.295 "num_base_bdevs_discovered": 1,
00:14:41.295 "num_base_bdevs_operational": 1,
00:14:41.295 "base_bdevs_list": [
00:14:41.295 {
00:14:41.295 "name": null,
00:14:41.295 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:41.295 "is_configured": false,
00:14:41.295 "data_offset": 0,
00:14:41.295 "data_size": 63488
00:14:41.295 },
00:14:41.295 {
00:14:41.295 "name": "BaseBdev2",
00:14:41.295 "uuid": "77b3e77a-d63e-5405-b057-bff916818067",
00:14:41.295 "is_configured": true,
00:14:41.295 "data_offset": 2048,
00:14:41.295 "data_size": 63488
00:14:41.295 }
00:14:41.295 ]
00:14:41.295 }'
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:41.295 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:41.560 [2024-12-06 13:08:48.053275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:41.560 [2024-12-06 13:08:48.053462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:41.560 [2024-12-06 13:08:48.056916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:41.560 [2024-12-06 13:08:48.057095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:41.560 [2024-12-06 13:08:48.057262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:41.560 [2024-12-06 13:08:48.057285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:14:41.560 {
00:14:41.560 "results": [
00:14:41.560 {
00:14:41.560 "job": "raid_bdev1",
00:14:41.560 "core_mask": "0x1",
00:14:41.560 "workload": "randrw",
00:14:41.560 "percentage": 50,
00:14:41.560 "status": "finished",
00:14:41.560 "queue_depth": 1,
00:14:41.560 "io_size": 131072,
00:14:41.560 "runtime": 1.387312,
00:14:41.560 "iops": 13256.57098042834,
00:14:41.560 "mibps": 1657.0713725535425,
00:14:41.560 "io_failed": 0,
00:14:41.560 "io_timeout": 0,
00:14:41.560 "avg_latency_us": 71.22185970410428,
00:14:41.560 "min_latency_us": 41.192727272727275,
00:14:41.560 "max_latency_us": 1802.24
00:14:41.560 }
00:14:41.560 ],
00:14:41.560 "core_count": 1
00:14:41.560 }
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63910
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63910 ']'
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63910
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:41.560 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63910
killing process with pid 63910
13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:41.819 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:41.819 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63910'
00:14:41.819 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63910
00:14:41.819 [2024-12-06 13:08:48.099590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:41.819 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63910
00:14:41.819 [2024-12-06 13:08:48.229863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.czE1NKwiFc
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:14:43.196
00:14:43.196 real 0m4.684s
00:14:43.196 user 0m5.764s
00:14:43.196 sys 0m0.664s
00:14:43.196 ************************************
00:14:43.196 END TEST raid_write_error_test
00:14:43.196 ************************************
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:43.196 13:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:14:43.196 13:08:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:14:43.196 13:08:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:14:43.196 13:08:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:14:43.196 13:08:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:14:43.196 13:08:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:43.196 13:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:43.196 ************************************
00:14:43.196 START TEST raid_state_function_test
00:14:43.196 ************************************
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
Process raid pid: 64048
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64048
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64048'
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64048
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64048 ']'
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:43.196 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:43.196 [2024-12-06 13:08:49.582090] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:14:43.196 [2024-12-06 13:08:49.583092] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:43.454 [2024-12-06 13:08:49.772010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:43.454 [2024-12-06 13:08:49.909993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:43.711 [2024-12-06 13:08:50.117319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:43.711 [2024-12-06 13:08:50.117596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:44.277 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:44.277 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:14:44.277 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:44.277 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.277 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.278 [2024-12-06 13:08:50.616589] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:44.278 [2024-12-06 13:08:50.616843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:44.278 [2024-12-06 13:08:50.616875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:44.278 [2024-12-06 13:08:50.616894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:44.278 [2024-12-06 13:08:50.616905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:44.278 [2024-12-06 13:08:50.616920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:44.278 "name": "Existed_Raid",
00:14:44.278 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.278 "strip_size_kb": 64,
00:14:44.278 "state": "configuring",
00:14:44.278 "raid_level": "raid0",
00:14:44.278 "superblock": false,
00:14:44.278 "num_base_bdevs": 3,
00:14:44.278 "num_base_bdevs_discovered": 0,
00:14:44.278 "num_base_bdevs_operational": 3,
00:14:44.278 "base_bdevs_list": [
00:14:44.278 {
00:14:44.278 "name": "BaseBdev1",
00:14:44.278 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.278 "is_configured": false,
00:14:44.278 "data_offset": 0,
00:14:44.278 "data_size": 0
00:14:44.278 },
00:14:44.278 {
00:14:44.278 "name": "BaseBdev2",
00:14:44.278 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.278 "is_configured": false,
00:14:44.278 "data_offset": 0,
00:14:44.278 "data_size": 0
00:14:44.278 },
00:14:44.278 {
00:14:44.278 "name": "BaseBdev3",
00:14:44.278 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.278 "is_configured": false,
00:14:44.278 "data_offset": 0,
00:14:44.278 "data_size": 0
00:14:44.278 }
00:14:44.278 ]
00:14:44.278 }'
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:44.278 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.845 [2024-12-06 13:08:51.128643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:44.845 [2024-12-06 13:08:51.128922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.845 [2024-12-06 13:08:51.136621] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:44.845 [2024-12-06 13:08:51.136831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:44.845 [2024-12-06 13:08:51.136858] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:44.845 [2024-12-06 13:08:51.136883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:44.845 [2024-12-06 13:08:51.136893] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:44.845 [2024-12-06 13:08:51.136908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.845 [2024-12-06 13:08:51.185323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.845 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.845 [
00:14:44.845 {
00:14:44.845 "name": "BaseBdev1",
00:14:44.845 "aliases": [
00:14:44.845 "fbc92876-1750-4024-bfe9-1c084bb0cd0e"
00:14:44.845 ],
00:14:44.845 "product_name": "Malloc disk",
00:14:44.845 "block_size": 512,
00:14:44.845 "num_blocks": 65536,
00:14:44.845 "uuid": "fbc92876-1750-4024-bfe9-1c084bb0cd0e",
00:14:44.845 "assigned_rate_limits": {
00:14:44.845 "rw_ios_per_sec": 0,
00:14:44.845 "rw_mbytes_per_sec": 0,
00:14:44.845 "r_mbytes_per_sec": 0,
00:14:44.845 "w_mbytes_per_sec": 0
00:14:44.845 },
00:14:44.845 "claimed": true,
00:14:44.846 "claim_type": "exclusive_write",
00:14:44.846 "zoned": false,
00:14:44.846 "supported_io_types": {
00:14:44.846 "read": true,
00:14:44.846 "write": true,
00:14:44.846 "unmap": true,
00:14:44.846 "flush": true,
00:14:44.846 "reset": true,
00:14:44.846 "nvme_admin": false,
00:14:44.846 "nvme_io": false,
00:14:44.846 "nvme_io_md": false,
00:14:44.846 "write_zeroes": true,
00:14:44.846 "zcopy": true,
00:14:44.846 "get_zone_info": false,
00:14:44.846 "zone_management": false,
00:14:44.846 "zone_append": false,
00:14:44.846 "compare": false,
00:14:44.846 "compare_and_write": false,
00:14:44.846 "abort": true,
00:14:44.846 "seek_hole": false,
00:14:44.846 "seek_data": false,
00:14:44.846 "copy": true,
00:14:44.846 "nvme_iov_md": false
00:14:44.846 },
00:14:44.846 "memory_domains": [
00:14:44.846 {
00:14:44.846 "dma_device_id": "system",
00:14:44.846 "dma_device_type": 1
00:14:44.846 },
00:14:44.846 {
00:14:44.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:44.846 "dma_device_type": 2
00:14:44.846 }
00:14:44.846 ],
00:14:44.846 "driver_specific": {}
00:14:44.846 }
00:14:44.846 ]
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:44.846 "name": "Existed_Raid",
00:14:44.846 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.846 "strip_size_kb": 64,
00:14:44.846 "state": "configuring",
00:14:44.846 "raid_level": "raid0",
00:14:44.846 "superblock": false,
00:14:44.846 "num_base_bdevs": 3,
00:14:44.846 "num_base_bdevs_discovered": 1,
00:14:44.846 "num_base_bdevs_operational": 3,
00:14:44.846 "base_bdevs_list": [
00:14:44.846 {
00:14:44.846 "name": "BaseBdev1",
00:14:44.846 "uuid": "fbc92876-1750-4024-bfe9-1c084bb0cd0e",
00:14:44.846 "is_configured": true,
00:14:44.846 "data_offset": 0,
00:14:44.846 "data_size": 65536
00:14:44.846 },
00:14:44.846 {
00:14:44.846 "name": "BaseBdev2",
00:14:44.846 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.846 "is_configured": false,
00:14:44.846 "data_offset": 0,
00:14:44.846 "data_size": 0
00:14:44.846 },
00:14:44.846 {
00:14:44.846 "name": "BaseBdev3",
00:14:44.846 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:44.846 "is_configured": false,
00:14:44.846 "data_offset": 0,
00:14:44.846 "data_size": 0
00:14:44.846 }
00:14:44.846 ]
00:14:44.846 }'
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:44.846 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.413 [2024-12-06 13:08:51.745584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:45.413 [2024-12-06 13:08:51.745932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.413 [2024-12-06 13:08:51.753595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:45.413 [2024-12-06 13:08:51.756092] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:45.413 [2024-12-06 13:08:51.756200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:45.413 [2024-12-06 13:08:51.756219] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:45.413 [2024-12-06 13:08:51.756236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.413 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:45.413 "name": "Existed_Raid",
00:14:45.413 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.413 "strip_size_kb": 64,
00:14:45.413 "state": "configuring",
00:14:45.413 "raid_level": "raid0",
00:14:45.413 "superblock": false,
00:14:45.413 "num_base_bdevs": 3,
00:14:45.413 "num_base_bdevs_discovered": 1,
00:14:45.413 "num_base_bdevs_operational": 3,
00:14:45.413 "base_bdevs_list": [
00:14:45.413 {
00:14:45.413 "name": "BaseBdev1",
00:14:45.413 "uuid": "fbc92876-1750-4024-bfe9-1c084bb0cd0e",
00:14:45.413 "is_configured": true,
00:14:45.413 "data_offset": 0,
00:14:45.413 "data_size": 65536
00:14:45.413 },
00:14:45.413 {
00:14:45.413 "name": "BaseBdev2",
00:14:45.413 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.413 "is_configured": false,
00:14:45.413 "data_offset": 0,
00:14:45.413 "data_size": 0
00:14:45.413 },
00:14:45.413 {
00:14:45.413 "name": "BaseBdev3",
00:14:45.413 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.413 "is_configured": false,
00:14:45.413 "data_offset": 0,
00:14:45.413 "data_size": 0
00:14:45.413 }
00:14:45.414 ]
00:14:45.414 }'
00:14:45.414 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:45.414 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.979 [2024-12-06 13:08:52.271753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.979 [
00:14:45.979 {
00:14:45.979 "name": "BaseBdev2",
00:14:45.979 "aliases": [
00:14:45.979 "2c4642a8-9523-4370-b9e1-82defecb169e"
00:14:45.979 ],
00:14:45.979 "product_name": "Malloc disk",
00:14:45.979 "block_size": 512,
00:14:45.979 "num_blocks": 65536,
00:14:45.979 "uuid": "2c4642a8-9523-4370-b9e1-82defecb169e",
00:14:45.979 "assigned_rate_limits": {
00:14:45.979 "rw_ios_per_sec": 0,
00:14:45.979 "rw_mbytes_per_sec": 0,
00:14:45.979 "r_mbytes_per_sec": 0,
00:14:45.979 "w_mbytes_per_sec": 0
00:14:45.979 },
00:14:45.979 "claimed": true,
00:14:45.979 "claim_type": "exclusive_write",
00:14:45.979 "zoned": false,
00:14:45.979 "supported_io_types": {
00:14:45.979 "read": true,
00:14:45.979 "write": true,
00:14:45.979 "unmap": true,
00:14:45.979 "flush": true,
00:14:45.979 "reset": true,
00:14:45.979 "nvme_admin": false,
00:14:45.979 "nvme_io": false,
00:14:45.979 "nvme_io_md": false,
00:14:45.979 "write_zeroes": true,
00:14:45.979 "zcopy": true,
00:14:45.979 "get_zone_info": false,
00:14:45.979 "zone_management": false,
00:14:45.979 "zone_append": false,
00:14:45.979 "compare": false,
00:14:45.979 "compare_and_write": false,
00:14:45.979 "abort": true,
00:14:45.979 "seek_hole": false,
00:14:45.979 "seek_data": false,
00:14:45.979 "copy": true,
00:14:45.979 "nvme_iov_md": false
00:14:45.979 },
00:14:45.979 "memory_domains": [
00:14:45.979 {
00:14:45.979 "dma_device_id": "system",
00:14:45.979 "dma_device_type": 1
00:14:45.979 },
00:14:45.979 {
00:14:45.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:45.979 "dma_device_type": 2
00:14:45.979 }
00:14:45.979 ],
00:14:45.979 "driver_specific": {}
00:14:45.979 }
00:14:45.979 ]
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:45.979 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.980 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.980 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.980 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:45.980 "name": "Existed_Raid",
00:14:45.980 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.980 "strip_size_kb": 64,
00:14:45.980 "state": "configuring",
00:14:45.980 "raid_level": "raid0",
00:14:45.980 "superblock": false,
00:14:45.980 "num_base_bdevs": 3,
00:14:45.980 "num_base_bdevs_discovered": 2,
00:14:45.980 "num_base_bdevs_operational": 3,
00:14:45.980 "base_bdevs_list": [
00:14:45.980 {
00:14:45.980 "name": "BaseBdev1",
00:14:45.980 "uuid": "fbc92876-1750-4024-bfe9-1c084bb0cd0e",
00:14:45.980 "is_configured": true,
00:14:45.980 "data_offset": 0,
00:14:45.980 "data_size": 65536
00:14:45.980 },
00:14:45.980 {
00:14:45.980 "name": "BaseBdev2",
00:14:45.980 "uuid": "2c4642a8-9523-4370-b9e1-82defecb169e",
00:14:45.980 "is_configured": true,
00:14:45.980 "data_offset": 0,
00:14:45.980 "data_size": 65536
00:14:45.980 },
00:14:45.980 {
00:14:45.980 "name": "BaseBdev3",
00:14:45.980 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.980 "is_configured": false,
00:14:45.980 "data_offset": 0,
00:14:45.980 "data_size": 0
00:14:45.980 }
00:14:45.980 ]
00:14:45.980 }'
00:14:45.980 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:45.980 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.546 [2024-12-06 13:08:52.850022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-12-06 13:08:52.850087]
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:46.546 [2024-12-06 13:08:52.850110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:46.546 [2024-12-06 13:08:52.850512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:46.546 [2024-12-06 13:08:52.850774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:46.546 [2024-12-06 13:08:52.850803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:46.546 [2024-12-06 13:08:52.851146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.546 BaseBdev3 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.546 
13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.546 [ 00:14:46.546 { 00:14:46.546 "name": "BaseBdev3", 00:14:46.546 "aliases": [ 00:14:46.546 "e223d683-c084-4dc9-a683-971596d94ea5" 00:14:46.546 ], 00:14:46.546 "product_name": "Malloc disk", 00:14:46.546 "block_size": 512, 00:14:46.546 "num_blocks": 65536, 00:14:46.546 "uuid": "e223d683-c084-4dc9-a683-971596d94ea5", 00:14:46.546 "assigned_rate_limits": { 00:14:46.546 "rw_ios_per_sec": 0, 00:14:46.546 "rw_mbytes_per_sec": 0, 00:14:46.546 "r_mbytes_per_sec": 0, 00:14:46.546 "w_mbytes_per_sec": 0 00:14:46.546 }, 00:14:46.546 "claimed": true, 00:14:46.546 "claim_type": "exclusive_write", 00:14:46.546 "zoned": false, 00:14:46.546 "supported_io_types": { 00:14:46.546 "read": true, 00:14:46.546 "write": true, 00:14:46.546 "unmap": true, 00:14:46.546 "flush": true, 00:14:46.546 "reset": true, 00:14:46.546 "nvme_admin": false, 00:14:46.546 "nvme_io": false, 00:14:46.546 "nvme_io_md": false, 00:14:46.546 "write_zeroes": true, 00:14:46.546 "zcopy": true, 00:14:46.546 "get_zone_info": false, 00:14:46.546 "zone_management": false, 00:14:46.546 "zone_append": false, 00:14:46.546 "compare": false, 00:14:46.546 "compare_and_write": false, 00:14:46.546 "abort": true, 00:14:46.546 "seek_hole": false, 00:14:46.546 "seek_data": false, 00:14:46.546 "copy": true, 00:14:46.546 "nvme_iov_md": false 00:14:46.546 }, 00:14:46.546 "memory_domains": [ 00:14:46.546 { 00:14:46.546 "dma_device_id": "system", 00:14:46.546 "dma_device_type": 1 00:14:46.546 }, 00:14:46.546 { 00:14:46.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.546 "dma_device_type": 2 00:14:46.546 } 00:14:46.546 ], 00:14:46.546 "driver_specific": {} 00:14:46.546 } 00:14:46.546 ] 
00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.546 "name": "Existed_Raid", 00:14:46.546 "uuid": "53816cd4-2cc2-4c35-b2c5-4a21101bff57", 00:14:46.546 "strip_size_kb": 64, 00:14:46.546 "state": "online", 00:14:46.546 "raid_level": "raid0", 00:14:46.546 "superblock": false, 00:14:46.546 "num_base_bdevs": 3, 00:14:46.546 "num_base_bdevs_discovered": 3, 00:14:46.546 "num_base_bdevs_operational": 3, 00:14:46.546 "base_bdevs_list": [ 00:14:46.546 { 00:14:46.546 "name": "BaseBdev1", 00:14:46.546 "uuid": "fbc92876-1750-4024-bfe9-1c084bb0cd0e", 00:14:46.546 "is_configured": true, 00:14:46.546 "data_offset": 0, 00:14:46.546 "data_size": 65536 00:14:46.546 }, 00:14:46.546 { 00:14:46.546 "name": "BaseBdev2", 00:14:46.546 "uuid": "2c4642a8-9523-4370-b9e1-82defecb169e", 00:14:46.546 "is_configured": true, 00:14:46.546 "data_offset": 0, 00:14:46.546 "data_size": 65536 00:14:46.546 }, 00:14:46.546 { 00:14:46.546 "name": "BaseBdev3", 00:14:46.546 "uuid": "e223d683-c084-4dc9-a683-971596d94ea5", 00:14:46.546 "is_configured": true, 00:14:46.546 "data_offset": 0, 00:14:46.546 "data_size": 65536 00:14:46.546 } 00:14:46.546 ] 00:14:46.546 }' 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.546 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.113 [2024-12-06 13:08:53.370701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.113 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.113 "name": "Existed_Raid", 00:14:47.113 "aliases": [ 00:14:47.113 "53816cd4-2cc2-4c35-b2c5-4a21101bff57" 00:14:47.113 ], 00:14:47.113 "product_name": "Raid Volume", 00:14:47.113 "block_size": 512, 00:14:47.113 "num_blocks": 196608, 00:14:47.113 "uuid": "53816cd4-2cc2-4c35-b2c5-4a21101bff57", 00:14:47.113 "assigned_rate_limits": { 00:14:47.113 "rw_ios_per_sec": 0, 00:14:47.113 "rw_mbytes_per_sec": 0, 00:14:47.113 "r_mbytes_per_sec": 0, 00:14:47.113 "w_mbytes_per_sec": 0 00:14:47.113 }, 00:14:47.113 "claimed": false, 00:14:47.113 "zoned": false, 00:14:47.113 "supported_io_types": { 00:14:47.113 "read": true, 00:14:47.113 "write": true, 00:14:47.113 "unmap": true, 00:14:47.113 "flush": true, 00:14:47.113 "reset": true, 00:14:47.113 "nvme_admin": false, 00:14:47.113 "nvme_io": false, 00:14:47.113 "nvme_io_md": false, 00:14:47.113 "write_zeroes": true, 00:14:47.113 "zcopy": false, 00:14:47.113 "get_zone_info": false, 00:14:47.113 "zone_management": false, 00:14:47.113 
"zone_append": false, 00:14:47.113 "compare": false, 00:14:47.113 "compare_and_write": false, 00:14:47.113 "abort": false, 00:14:47.113 "seek_hole": false, 00:14:47.113 "seek_data": false, 00:14:47.113 "copy": false, 00:14:47.113 "nvme_iov_md": false 00:14:47.113 }, 00:14:47.113 "memory_domains": [ 00:14:47.113 { 00:14:47.113 "dma_device_id": "system", 00:14:47.113 "dma_device_type": 1 00:14:47.113 }, 00:14:47.113 { 00:14:47.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.113 "dma_device_type": 2 00:14:47.113 }, 00:14:47.113 { 00:14:47.113 "dma_device_id": "system", 00:14:47.113 "dma_device_type": 1 00:14:47.113 }, 00:14:47.113 { 00:14:47.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.113 "dma_device_type": 2 00:14:47.113 }, 00:14:47.113 { 00:14:47.113 "dma_device_id": "system", 00:14:47.113 "dma_device_type": 1 00:14:47.113 }, 00:14:47.113 { 00:14:47.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.113 "dma_device_type": 2 00:14:47.113 } 00:14:47.113 ], 00:14:47.113 "driver_specific": { 00:14:47.113 "raid": { 00:14:47.113 "uuid": "53816cd4-2cc2-4c35-b2c5-4a21101bff57", 00:14:47.114 "strip_size_kb": 64, 00:14:47.114 "state": "online", 00:14:47.114 "raid_level": "raid0", 00:14:47.114 "superblock": false, 00:14:47.114 "num_base_bdevs": 3, 00:14:47.114 "num_base_bdevs_discovered": 3, 00:14:47.114 "num_base_bdevs_operational": 3, 00:14:47.114 "base_bdevs_list": [ 00:14:47.114 { 00:14:47.114 "name": "BaseBdev1", 00:14:47.114 "uuid": "fbc92876-1750-4024-bfe9-1c084bb0cd0e", 00:14:47.114 "is_configured": true, 00:14:47.114 "data_offset": 0, 00:14:47.114 "data_size": 65536 00:14:47.114 }, 00:14:47.114 { 00:14:47.114 "name": "BaseBdev2", 00:14:47.114 "uuid": "2c4642a8-9523-4370-b9e1-82defecb169e", 00:14:47.114 "is_configured": true, 00:14:47.114 "data_offset": 0, 00:14:47.114 "data_size": 65536 00:14:47.114 }, 00:14:47.114 { 00:14:47.114 "name": "BaseBdev3", 00:14:47.114 "uuid": "e223d683-c084-4dc9-a683-971596d94ea5", 00:14:47.114 "is_configured": true, 
00:14:47.114 "data_offset": 0, 00:14:47.114 "data_size": 65536 00:14:47.114 } 00:14:47.114 ] 00:14:47.114 } 00:14:47.114 } 00:14:47.114 }' 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:47.114 BaseBdev2 00:14:47.114 BaseBdev3' 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.114 13:08:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.114 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.372 [2024-12-06 13:08:53.690439] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.372 [2024-12-06 13:08:53.690511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.372 [2024-12-06 13:08:53.690592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.372 "name": "Existed_Raid", 00:14:47.372 "uuid": "53816cd4-2cc2-4c35-b2c5-4a21101bff57", 00:14:47.372 "strip_size_kb": 64, 00:14:47.372 "state": "offline", 00:14:47.372 "raid_level": "raid0", 00:14:47.372 "superblock": false, 00:14:47.372 "num_base_bdevs": 3, 00:14:47.372 "num_base_bdevs_discovered": 2, 00:14:47.372 "num_base_bdevs_operational": 2, 00:14:47.372 "base_bdevs_list": [ 00:14:47.372 { 00:14:47.372 "name": null, 00:14:47.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.372 "is_configured": false, 00:14:47.372 "data_offset": 0, 00:14:47.372 "data_size": 65536 00:14:47.372 }, 00:14:47.372 { 00:14:47.372 "name": "BaseBdev2", 00:14:47.372 "uuid": "2c4642a8-9523-4370-b9e1-82defecb169e", 00:14:47.372 "is_configured": true, 00:14:47.372 "data_offset": 0, 00:14:47.372 "data_size": 65536 00:14:47.372 }, 00:14:47.372 { 00:14:47.372 "name": "BaseBdev3", 00:14:47.372 "uuid": "e223d683-c084-4dc9-a683-971596d94ea5", 00:14:47.372 "is_configured": true, 00:14:47.372 "data_offset": 0, 00:14:47.372 "data_size": 65536 00:14:47.372 } 00:14:47.372 ] 00:14:47.372 }' 00:14:47.372 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.372 13:08:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.938 [2024-12-06 13:08:54.325265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.938 13:08:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.938 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.939 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.196 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.196 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.196 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:48.196 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.197 [2024-12-06 13:08:54.470133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.197 [2024-12-06 13:08:54.470231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.197 BaseBdev2 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.197 [ 00:14:48.197 { 00:14:48.197 "name": "BaseBdev2", 00:14:48.197 "aliases": [ 00:14:48.197 "8b7e430b-233e-402e-bc9e-71fe8633805b" 00:14:48.197 ], 00:14:48.197 "product_name": "Malloc disk", 00:14:48.197 "block_size": 512, 00:14:48.197 "num_blocks": 65536, 00:14:48.197 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:48.197 "assigned_rate_limits": { 00:14:48.197 "rw_ios_per_sec": 0, 00:14:48.197 "rw_mbytes_per_sec": 0, 00:14:48.197 "r_mbytes_per_sec": 0, 00:14:48.197 "w_mbytes_per_sec": 0 00:14:48.197 }, 00:14:48.197 "claimed": false, 00:14:48.197 "zoned": false, 00:14:48.197 "supported_io_types": { 00:14:48.197 "read": true, 00:14:48.197 "write": true, 00:14:48.197 "unmap": true, 00:14:48.197 "flush": true, 00:14:48.197 "reset": true, 00:14:48.197 "nvme_admin": false, 00:14:48.197 "nvme_io": false, 00:14:48.197 "nvme_io_md": false, 00:14:48.197 "write_zeroes": true, 00:14:48.197 "zcopy": true, 00:14:48.197 "get_zone_info": false, 00:14:48.197 "zone_management": false, 00:14:48.197 "zone_append": false, 00:14:48.197 "compare": false, 00:14:48.197 "compare_and_write": false, 00:14:48.197 "abort": true, 00:14:48.197 "seek_hole": false, 00:14:48.197 "seek_data": false, 00:14:48.197 "copy": true, 00:14:48.197 "nvme_iov_md": false 00:14:48.197 }, 00:14:48.197 "memory_domains": [ 00:14:48.197 { 00:14:48.197 "dma_device_id": "system", 00:14:48.197 "dma_device_type": 1 00:14:48.197 }, 
00:14:48.197 { 00:14:48.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.197 "dma_device_type": 2 00:14:48.197 } 00:14:48.197 ], 00:14:48.197 "driver_specific": {} 00:14:48.197 } 00:14:48.197 ] 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.197 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.455 BaseBdev3 00:14:48.455 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.455 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:48.455 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:48.455 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.455 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:48.455 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.455 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.456 [ 00:14:48.456 { 00:14:48.456 "name": "BaseBdev3", 00:14:48.456 "aliases": [ 00:14:48.456 "e4259554-5918-485c-a3bf-bd0476345c39" 00:14:48.456 ], 00:14:48.456 "product_name": "Malloc disk", 00:14:48.456 "block_size": 512, 00:14:48.456 "num_blocks": 65536, 00:14:48.456 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:48.456 "assigned_rate_limits": { 00:14:48.456 "rw_ios_per_sec": 0, 00:14:48.456 "rw_mbytes_per_sec": 0, 00:14:48.456 "r_mbytes_per_sec": 0, 00:14:48.456 "w_mbytes_per_sec": 0 00:14:48.456 }, 00:14:48.456 "claimed": false, 00:14:48.456 "zoned": false, 00:14:48.456 "supported_io_types": { 00:14:48.456 "read": true, 00:14:48.456 "write": true, 00:14:48.456 "unmap": true, 00:14:48.456 "flush": true, 00:14:48.456 "reset": true, 00:14:48.456 "nvme_admin": false, 00:14:48.456 "nvme_io": false, 00:14:48.456 "nvme_io_md": false, 00:14:48.456 "write_zeroes": true, 00:14:48.456 "zcopy": true, 00:14:48.456 "get_zone_info": false, 00:14:48.456 "zone_management": false, 00:14:48.456 "zone_append": false, 00:14:48.456 "compare": false, 00:14:48.456 "compare_and_write": false, 00:14:48.456 "abort": true, 00:14:48.456 "seek_hole": false, 00:14:48.456 "seek_data": false, 00:14:48.456 "copy": true, 00:14:48.456 "nvme_iov_md": false 00:14:48.456 }, 00:14:48.456 "memory_domains": [ 00:14:48.456 { 00:14:48.456 "dma_device_id": "system", 00:14:48.456 "dma_device_type": 1 00:14:48.456 }, 00:14:48.456 { 
00:14:48.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.456 "dma_device_type": 2 00:14:48.456 } 00:14:48.456 ], 00:14:48.456 "driver_specific": {} 00:14:48.456 } 00:14:48.456 ] 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.456 [2024-12-06 13:08:54.753711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.456 [2024-12-06 13:08:54.753785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.456 [2024-12-06 13:08:54.753819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.456 [2024-12-06 13:08:54.756336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.456 "name": "Existed_Raid", 00:14:48.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.456 "strip_size_kb": 64, 00:14:48.456 "state": "configuring", 00:14:48.456 "raid_level": "raid0", 00:14:48.456 "superblock": false, 00:14:48.456 "num_base_bdevs": 3, 00:14:48.456 "num_base_bdevs_discovered": 2, 00:14:48.456 "num_base_bdevs_operational": 3, 00:14:48.456 "base_bdevs_list": [ 00:14:48.456 { 00:14:48.456 "name": "BaseBdev1", 00:14:48.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.456 
"is_configured": false, 00:14:48.456 "data_offset": 0, 00:14:48.456 "data_size": 0 00:14:48.456 }, 00:14:48.456 { 00:14:48.456 "name": "BaseBdev2", 00:14:48.456 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:48.456 "is_configured": true, 00:14:48.456 "data_offset": 0, 00:14:48.456 "data_size": 65536 00:14:48.456 }, 00:14:48.456 { 00:14:48.456 "name": "BaseBdev3", 00:14:48.456 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:48.456 "is_configured": true, 00:14:48.456 "data_offset": 0, 00:14:48.456 "data_size": 65536 00:14:48.456 } 00:14:48.456 ] 00:14:48.456 }' 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.456 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.021 [2024-12-06 13:08:55.293931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.021 13:08:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.021 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.021 "name": "Existed_Raid", 00:14:49.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.021 "strip_size_kb": 64, 00:14:49.021 "state": "configuring", 00:14:49.021 "raid_level": "raid0", 00:14:49.021 "superblock": false, 00:14:49.021 "num_base_bdevs": 3, 00:14:49.021 "num_base_bdevs_discovered": 1, 00:14:49.021 "num_base_bdevs_operational": 3, 00:14:49.021 "base_bdevs_list": [ 00:14:49.021 { 00:14:49.021 "name": "BaseBdev1", 00:14:49.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.022 "is_configured": false, 00:14:49.022 "data_offset": 0, 00:14:49.022 "data_size": 0 00:14:49.022 }, 00:14:49.022 { 00:14:49.022 "name": null, 00:14:49.022 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:49.022 "is_configured": false, 00:14:49.022 "data_offset": 0, 
00:14:49.022 "data_size": 65536 00:14:49.022 }, 00:14:49.022 { 00:14:49.022 "name": "BaseBdev3", 00:14:49.022 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:49.022 "is_configured": true, 00:14:49.022 "data_offset": 0, 00:14:49.022 "data_size": 65536 00:14:49.022 } 00:14:49.022 ] 00:14:49.022 }' 00:14:49.022 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.022 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.278 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.278 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.278 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.278 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.536 [2024-12-06 13:08:55.891872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.536 BaseBdev1 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.536 [ 00:14:49.536 { 00:14:49.536 "name": "BaseBdev1", 00:14:49.536 "aliases": [ 00:14:49.536 "58e0625a-50c8-4679-8d0d-bec4f9f409f5" 00:14:49.536 ], 00:14:49.536 "product_name": "Malloc disk", 00:14:49.536 "block_size": 512, 00:14:49.536 "num_blocks": 65536, 00:14:49.536 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:49.536 "assigned_rate_limits": { 00:14:49.536 "rw_ios_per_sec": 0, 00:14:49.536 "rw_mbytes_per_sec": 0, 00:14:49.536 "r_mbytes_per_sec": 0, 00:14:49.536 "w_mbytes_per_sec": 0 00:14:49.536 }, 00:14:49.536 "claimed": true, 00:14:49.536 "claim_type": "exclusive_write", 00:14:49.536 "zoned": false, 00:14:49.536 "supported_io_types": { 00:14:49.536 "read": true, 00:14:49.536 "write": true, 00:14:49.536 "unmap": 
true, 00:14:49.536 "flush": true, 00:14:49.536 "reset": true, 00:14:49.536 "nvme_admin": false, 00:14:49.536 "nvme_io": false, 00:14:49.536 "nvme_io_md": false, 00:14:49.536 "write_zeroes": true, 00:14:49.536 "zcopy": true, 00:14:49.536 "get_zone_info": false, 00:14:49.536 "zone_management": false, 00:14:49.536 "zone_append": false, 00:14:49.536 "compare": false, 00:14:49.536 "compare_and_write": false, 00:14:49.536 "abort": true, 00:14:49.536 "seek_hole": false, 00:14:49.536 "seek_data": false, 00:14:49.536 "copy": true, 00:14:49.536 "nvme_iov_md": false 00:14:49.536 }, 00:14:49.536 "memory_domains": [ 00:14:49.536 { 00:14:49.536 "dma_device_id": "system", 00:14:49.536 "dma_device_type": 1 00:14:49.536 }, 00:14:49.536 { 00:14:49.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.536 "dma_device_type": 2 00:14:49.536 } 00:14:49.536 ], 00:14:49.536 "driver_specific": {} 00:14:49.536 } 00:14:49.536 ] 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:49.536 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.537 13:08:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.537 "name": "Existed_Raid", 00:14:49.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.537 "strip_size_kb": 64, 00:14:49.537 "state": "configuring", 00:14:49.537 "raid_level": "raid0", 00:14:49.537 "superblock": false, 00:14:49.537 "num_base_bdevs": 3, 00:14:49.537 "num_base_bdevs_discovered": 2, 00:14:49.537 "num_base_bdevs_operational": 3, 00:14:49.537 "base_bdevs_list": [ 00:14:49.537 { 00:14:49.537 "name": "BaseBdev1", 00:14:49.537 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:49.537 "is_configured": true, 00:14:49.537 "data_offset": 0, 00:14:49.537 "data_size": 65536 00:14:49.537 }, 00:14:49.537 { 00:14:49.537 "name": null, 00:14:49.537 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:49.537 "is_configured": false, 00:14:49.537 "data_offset": 0, 00:14:49.537 "data_size": 65536 00:14:49.537 }, 00:14:49.537 { 00:14:49.537 "name": "BaseBdev3", 00:14:49.537 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:49.537 "is_configured": true, 00:14:49.537 "data_offset": 0, 
00:14:49.537 "data_size": 65536 00:14:49.537 } 00:14:49.537 ] 00:14:49.537 }' 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.537 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.102 [2024-12-06 13:08:56.484074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.102 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.103 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.103 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.103 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.103 "name": "Existed_Raid", 00:14:50.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.103 "strip_size_kb": 64, 00:14:50.103 "state": "configuring", 00:14:50.103 "raid_level": "raid0", 00:14:50.103 "superblock": false, 00:14:50.103 "num_base_bdevs": 3, 00:14:50.103 "num_base_bdevs_discovered": 1, 00:14:50.103 "num_base_bdevs_operational": 3, 00:14:50.103 "base_bdevs_list": [ 00:14:50.103 { 00:14:50.103 "name": "BaseBdev1", 00:14:50.103 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:50.103 "is_configured": true, 00:14:50.103 "data_offset": 0, 00:14:50.103 "data_size": 65536 00:14:50.103 }, 00:14:50.103 { 
00:14:50.103 "name": null, 00:14:50.103 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:50.103 "is_configured": false, 00:14:50.103 "data_offset": 0, 00:14:50.103 "data_size": 65536 00:14:50.103 }, 00:14:50.103 { 00:14:50.103 "name": null, 00:14:50.103 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:50.103 "is_configured": false, 00:14:50.103 "data_offset": 0, 00:14:50.103 "data_size": 65536 00:14:50.103 } 00:14:50.103 ] 00:14:50.103 }' 00:14:50.103 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.103 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.669 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.669 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.670 [2024-12-06 13:08:57.060239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.670 "name": "Existed_Raid", 00:14:50.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.670 "strip_size_kb": 64, 00:14:50.670 "state": "configuring", 00:14:50.670 "raid_level": "raid0", 00:14:50.670 
"superblock": false, 00:14:50.670 "num_base_bdevs": 3, 00:14:50.670 "num_base_bdevs_discovered": 2, 00:14:50.670 "num_base_bdevs_operational": 3, 00:14:50.670 "base_bdevs_list": [ 00:14:50.670 { 00:14:50.670 "name": "BaseBdev1", 00:14:50.670 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:50.670 "is_configured": true, 00:14:50.670 "data_offset": 0, 00:14:50.670 "data_size": 65536 00:14:50.670 }, 00:14:50.670 { 00:14:50.670 "name": null, 00:14:50.670 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:50.670 "is_configured": false, 00:14:50.670 "data_offset": 0, 00:14:50.670 "data_size": 65536 00:14:50.670 }, 00:14:50.670 { 00:14:50.670 "name": "BaseBdev3", 00:14:50.670 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:50.670 "is_configured": true, 00:14:50.670 "data_offset": 0, 00:14:50.670 "data_size": 65536 00:14:50.670 } 00:14:50.670 ] 00:14:50.670 }' 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.670 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.236 [2024-12-06 13:08:57.616473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.236 13:08:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.494 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.494 "name": "Existed_Raid", 00:14:51.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.494 "strip_size_kb": 64, 00:14:51.494 "state": "configuring", 00:14:51.494 "raid_level": "raid0", 00:14:51.494 "superblock": false, 00:14:51.494 "num_base_bdevs": 3, 00:14:51.494 "num_base_bdevs_discovered": 1, 00:14:51.494 "num_base_bdevs_operational": 3, 00:14:51.494 "base_bdevs_list": [ 00:14:51.494 { 00:14:51.494 "name": null, 00:14:51.494 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:51.494 "is_configured": false, 00:14:51.494 "data_offset": 0, 00:14:51.494 "data_size": 65536 00:14:51.494 }, 00:14:51.494 { 00:14:51.494 "name": null, 00:14:51.494 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:51.494 "is_configured": false, 00:14:51.494 "data_offset": 0, 00:14:51.494 "data_size": 65536 00:14:51.494 }, 00:14:51.494 { 00:14:51.494 "name": "BaseBdev3", 00:14:51.494 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:51.494 "is_configured": true, 00:14:51.494 "data_offset": 0, 00:14:51.494 "data_size": 65536 00:14:51.494 } 00:14:51.494 ] 00:14:51.494 }' 00:14:51.494 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.494 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.751 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.751 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.751 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.751 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:51.751 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:14:51.751 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.010 [2024-12-06 13:08:58.283137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.010 "name": "Existed_Raid", 00:14:52.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.010 "strip_size_kb": 64, 00:14:52.010 "state": "configuring", 00:14:52.010 "raid_level": "raid0", 00:14:52.010 "superblock": false, 00:14:52.010 "num_base_bdevs": 3, 00:14:52.010 "num_base_bdevs_discovered": 2, 00:14:52.010 "num_base_bdevs_operational": 3, 00:14:52.010 "base_bdevs_list": [ 00:14:52.010 { 00:14:52.010 "name": null, 00:14:52.010 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:52.010 "is_configured": false, 00:14:52.010 "data_offset": 0, 00:14:52.010 "data_size": 65536 00:14:52.010 }, 00:14:52.010 { 00:14:52.010 "name": "BaseBdev2", 00:14:52.010 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:52.010 "is_configured": true, 00:14:52.010 "data_offset": 0, 00:14:52.010 "data_size": 65536 00:14:52.010 }, 00:14:52.010 { 00:14:52.010 "name": "BaseBdev3", 00:14:52.010 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:52.010 "is_configured": true, 00:14:52.010 "data_offset": 0, 00:14:52.010 "data_size": 65536 00:14:52.010 } 00:14:52.010 ] 00:14:52.010 }' 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.010 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.578 13:08:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 58e0625a-50c8-4679-8d0d-bec4f9f409f5 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.578 [2024-12-06 13:08:58.960100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:52.578 [2024-12-06 13:08:58.960172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:52.578 [2024-12-06 13:08:58.960196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:52.578 [2024-12-06 13:08:58.961063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:14:52.578 [2024-12-06 13:08:58.961507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:52.578 [2024-12-06 13:08:58.961538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:52.578 [2024-12-06 13:08:58.962114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.578 NewBaseBdev 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.578 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:52.578 [ 00:14:52.578 { 00:14:52.578 "name": "NewBaseBdev", 00:14:52.578 "aliases": [ 00:14:52.578 "58e0625a-50c8-4679-8d0d-bec4f9f409f5" 00:14:52.578 ], 00:14:52.578 "product_name": "Malloc disk", 00:14:52.578 "block_size": 512, 00:14:52.578 "num_blocks": 65536, 00:14:52.578 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:52.578 "assigned_rate_limits": { 00:14:52.578 "rw_ios_per_sec": 0, 00:14:52.578 "rw_mbytes_per_sec": 0, 00:14:52.578 "r_mbytes_per_sec": 0, 00:14:52.578 "w_mbytes_per_sec": 0 00:14:52.578 }, 00:14:52.578 "claimed": true, 00:14:52.578 "claim_type": "exclusive_write", 00:14:52.578 "zoned": false, 00:14:52.578 "supported_io_types": { 00:14:52.578 "read": true, 00:14:52.578 "write": true, 00:14:52.578 "unmap": true, 00:14:52.578 "flush": true, 00:14:52.578 "reset": true, 00:14:52.578 "nvme_admin": false, 00:14:52.578 "nvme_io": false, 00:14:52.578 "nvme_io_md": false, 00:14:52.578 "write_zeroes": true, 00:14:52.578 "zcopy": true, 00:14:52.578 "get_zone_info": false, 00:14:52.578 "zone_management": false, 00:14:52.578 "zone_append": false, 00:14:52.578 "compare": false, 00:14:52.578 "compare_and_write": false, 00:14:52.578 "abort": true, 00:14:52.578 "seek_hole": false, 00:14:52.578 "seek_data": false, 00:14:52.578 "copy": true, 00:14:52.578 "nvme_iov_md": false 00:14:52.578 }, 00:14:52.578 "memory_domains": [ 00:14:52.578 { 00:14:52.578 "dma_device_id": "system", 00:14:52.578 "dma_device_type": 1 00:14:52.579 }, 00:14:52.579 { 00:14:52.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.579 "dma_device_type": 2 00:14:52.579 } 00:14:52.579 ], 00:14:52.579 "driver_specific": {} 00:14:52.579 } 00:14:52.579 ] 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.579 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.579 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.579 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.579 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.579 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.579 "name": "Existed_Raid", 00:14:52.579 "uuid": "2ab844ab-be5d-4d20-b5a3-91b6e1e68e4a", 00:14:52.579 "strip_size_kb": 64, 00:14:52.579 "state": "online", 00:14:52.579 "raid_level": "raid0", 00:14:52.579 "superblock": false, 00:14:52.579 "num_base_bdevs": 3, 00:14:52.579 
"num_base_bdevs_discovered": 3, 00:14:52.579 "num_base_bdevs_operational": 3, 00:14:52.579 "base_bdevs_list": [ 00:14:52.579 { 00:14:52.579 "name": "NewBaseBdev", 00:14:52.579 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:52.579 "is_configured": true, 00:14:52.579 "data_offset": 0, 00:14:52.579 "data_size": 65536 00:14:52.579 }, 00:14:52.579 { 00:14:52.579 "name": "BaseBdev2", 00:14:52.579 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:52.579 "is_configured": true, 00:14:52.579 "data_offset": 0, 00:14:52.579 "data_size": 65536 00:14:52.579 }, 00:14:52.579 { 00:14:52.579 "name": "BaseBdev3", 00:14:52.579 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:52.579 "is_configured": true, 00:14:52.579 "data_offset": 0, 00:14:52.579 "data_size": 65536 00:14:52.579 } 00:14:52.579 ] 00:14:52.579 }' 00:14:52.579 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.579 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.146 [2024-12-06 13:08:59.508687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.146 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.146 "name": "Existed_Raid", 00:14:53.146 "aliases": [ 00:14:53.146 "2ab844ab-be5d-4d20-b5a3-91b6e1e68e4a" 00:14:53.146 ], 00:14:53.146 "product_name": "Raid Volume", 00:14:53.146 "block_size": 512, 00:14:53.146 "num_blocks": 196608, 00:14:53.146 "uuid": "2ab844ab-be5d-4d20-b5a3-91b6e1e68e4a", 00:14:53.146 "assigned_rate_limits": { 00:14:53.146 "rw_ios_per_sec": 0, 00:14:53.146 "rw_mbytes_per_sec": 0, 00:14:53.146 "r_mbytes_per_sec": 0, 00:14:53.146 "w_mbytes_per_sec": 0 00:14:53.146 }, 00:14:53.146 "claimed": false, 00:14:53.146 "zoned": false, 00:14:53.146 "supported_io_types": { 00:14:53.146 "read": true, 00:14:53.146 "write": true, 00:14:53.146 "unmap": true, 00:14:53.146 "flush": true, 00:14:53.146 "reset": true, 00:14:53.146 "nvme_admin": false, 00:14:53.146 "nvme_io": false, 00:14:53.146 "nvme_io_md": false, 00:14:53.146 "write_zeroes": true, 00:14:53.146 "zcopy": false, 00:14:53.146 "get_zone_info": false, 00:14:53.146 "zone_management": false, 00:14:53.146 "zone_append": false, 00:14:53.146 "compare": false, 00:14:53.146 "compare_and_write": false, 00:14:53.146 "abort": false, 00:14:53.146 "seek_hole": false, 00:14:53.146 "seek_data": false, 00:14:53.146 "copy": false, 00:14:53.146 "nvme_iov_md": false 00:14:53.146 }, 00:14:53.146 "memory_domains": [ 00:14:53.146 { 00:14:53.146 "dma_device_id": "system", 00:14:53.146 "dma_device_type": 1 00:14:53.146 }, 00:14:53.146 { 00:14:53.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.146 "dma_device_type": 2 00:14:53.146 }, 
00:14:53.146 { 00:14:53.146 "dma_device_id": "system", 00:14:53.146 "dma_device_type": 1 00:14:53.146 }, 00:14:53.146 { 00:14:53.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.146 "dma_device_type": 2 00:14:53.146 }, 00:14:53.146 { 00:14:53.146 "dma_device_id": "system", 00:14:53.146 "dma_device_type": 1 00:14:53.146 }, 00:14:53.146 { 00:14:53.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.146 "dma_device_type": 2 00:14:53.146 } 00:14:53.146 ], 00:14:53.146 "driver_specific": { 00:14:53.146 "raid": { 00:14:53.146 "uuid": "2ab844ab-be5d-4d20-b5a3-91b6e1e68e4a", 00:14:53.146 "strip_size_kb": 64, 00:14:53.146 "state": "online", 00:14:53.146 "raid_level": "raid0", 00:14:53.146 "superblock": false, 00:14:53.146 "num_base_bdevs": 3, 00:14:53.146 "num_base_bdevs_discovered": 3, 00:14:53.146 "num_base_bdevs_operational": 3, 00:14:53.146 "base_bdevs_list": [ 00:14:53.146 { 00:14:53.146 "name": "NewBaseBdev", 00:14:53.146 "uuid": "58e0625a-50c8-4679-8d0d-bec4f9f409f5", 00:14:53.146 "is_configured": true, 00:14:53.146 "data_offset": 0, 00:14:53.146 "data_size": 65536 00:14:53.146 }, 00:14:53.146 { 00:14:53.146 "name": "BaseBdev2", 00:14:53.146 "uuid": "8b7e430b-233e-402e-bc9e-71fe8633805b", 00:14:53.146 "is_configured": true, 00:14:53.146 "data_offset": 0, 00:14:53.146 "data_size": 65536 00:14:53.146 }, 00:14:53.146 { 00:14:53.146 "name": "BaseBdev3", 00:14:53.146 "uuid": "e4259554-5918-485c-a3bf-bd0476345c39", 00:14:53.146 "is_configured": true, 00:14:53.147 "data_offset": 0, 00:14:53.147 "data_size": 65536 00:14:53.147 } 00:14:53.147 ] 00:14:53.147 } 00:14:53.147 } 00:14:53.147 }' 00:14:53.147 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.147 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:53.147 BaseBdev2 00:14:53.147 BaseBdev3' 00:14:53.147 13:08:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.147 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.147 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.147 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:53.147 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.147 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.147 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.405 [2024-12-06 13:08:59.848389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.405 [2024-12-06 13:08:59.848441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.405 [2024-12-06 13:08:59.848569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.405 [2024-12-06 13:08:59.848648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.405 [2024-12-06 13:08:59.848669] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64048 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64048 ']' 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64048 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64048 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.405 killing process with pid 64048 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64048' 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64048 00:14:53.405 [2024-12-06 13:08:59.886071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.405 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64048 00:14:53.663 [2024-12-06 13:09:00.155973] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:55.038 00:14:55.038 real 0m11.742s 00:14:55.038 user 0m19.353s 00:14:55.038 sys 0m1.703s 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.038 ************************************ 00:14:55.038 END TEST raid_state_function_test 00:14:55.038 ************************************ 00:14:55.038 13:09:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:55.038 13:09:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:55.038 13:09:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.038 13:09:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.038 ************************************ 00:14:55.038 START TEST raid_state_function_test_sb 00:14:55.038 ************************************ 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64686 00:14:55.038 Process raid pid: 64686 
00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64686' 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64686 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64686 ']' 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.038 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.038 [2024-12-06 13:09:01.400272] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:14:55.038 [2024-12-06 13:09:01.400492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.297 [2024-12-06 13:09:01.594545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.297 [2024-12-06 13:09:01.770195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.555 [2024-12-06 13:09:02.006779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.555 [2024-12-06 13:09:02.006835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.121 [2024-12-06 13:09:02.404661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.121 [2024-12-06 13:09:02.404739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.121 [2024-12-06 13:09:02.404758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.121 [2024-12-06 13:09:02.404777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.121 [2024-12-06 13:09:02.404788] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:14:56.121 [2024-12-06 13:09:02.404804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.121 "name": "Existed_Raid", 00:14:56.121 "uuid": "6b3a8822-ca31-49b9-9bbf-666ae6be56cd", 00:14:56.121 "strip_size_kb": 64, 00:14:56.121 "state": "configuring", 00:14:56.121 "raid_level": "raid0", 00:14:56.121 "superblock": true, 00:14:56.121 "num_base_bdevs": 3, 00:14:56.121 "num_base_bdevs_discovered": 0, 00:14:56.121 "num_base_bdevs_operational": 3, 00:14:56.121 "base_bdevs_list": [ 00:14:56.121 { 00:14:56.121 "name": "BaseBdev1", 00:14:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.121 "is_configured": false, 00:14:56.121 "data_offset": 0, 00:14:56.121 "data_size": 0 00:14:56.121 }, 00:14:56.121 { 00:14:56.121 "name": "BaseBdev2", 00:14:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.121 "is_configured": false, 00:14:56.121 "data_offset": 0, 00:14:56.121 "data_size": 0 00:14:56.121 }, 00:14:56.121 { 00:14:56.121 "name": "BaseBdev3", 00:14:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.121 "is_configured": false, 00:14:56.121 "data_offset": 0, 00:14:56.121 "data_size": 0 00:14:56.121 } 00:14:56.121 ] 00:14:56.121 }' 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.121 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.802 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.803 [2024-12-06 13:09:02.932819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.803 [2024-12-06 13:09:02.932891] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.803 [2024-12-06 13:09:02.940771] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.803 [2024-12-06 13:09:02.940836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.803 [2024-12-06 13:09:02.940852] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.803 [2024-12-06 13:09:02.940870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.803 [2024-12-06 13:09:02.940881] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.803 [2024-12-06 13:09:02.940897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.803 [2024-12-06 13:09:02.990261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.803 BaseBdev1 
00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.803 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.803 [ 00:14:56.803 { 00:14:56.803 "name": "BaseBdev1", 00:14:56.803 "aliases": [ 00:14:56.803 "d0c0cb6b-1dd3-48c5-9c8f-581ffcf38ef7" 00:14:56.803 ], 00:14:56.803 "product_name": "Malloc disk", 00:14:56.803 "block_size": 512, 00:14:56.803 "num_blocks": 65536, 00:14:56.803 "uuid": "d0c0cb6b-1dd3-48c5-9c8f-581ffcf38ef7", 00:14:56.803 "assigned_rate_limits": { 00:14:56.803 
"rw_ios_per_sec": 0, 00:14:56.803 "rw_mbytes_per_sec": 0, 00:14:56.803 "r_mbytes_per_sec": 0, 00:14:56.803 "w_mbytes_per_sec": 0 00:14:56.803 }, 00:14:56.803 "claimed": true, 00:14:56.803 "claim_type": "exclusive_write", 00:14:56.803 "zoned": false, 00:14:56.803 "supported_io_types": { 00:14:56.803 "read": true, 00:14:56.803 "write": true, 00:14:56.803 "unmap": true, 00:14:56.803 "flush": true, 00:14:56.803 "reset": true, 00:14:56.803 "nvme_admin": false, 00:14:56.803 "nvme_io": false, 00:14:56.803 "nvme_io_md": false, 00:14:56.803 "write_zeroes": true, 00:14:56.803 "zcopy": true, 00:14:56.803 "get_zone_info": false, 00:14:56.803 "zone_management": false, 00:14:56.803 "zone_append": false, 00:14:56.803 "compare": false, 00:14:56.803 "compare_and_write": false, 00:14:56.803 "abort": true, 00:14:56.803 "seek_hole": false, 00:14:56.803 "seek_data": false, 00:14:56.803 "copy": true, 00:14:56.803 "nvme_iov_md": false 00:14:56.803 }, 00:14:56.803 "memory_domains": [ 00:14:56.803 { 00:14:56.803 "dma_device_id": "system", 00:14:56.803 "dma_device_type": 1 00:14:56.803 }, 00:14:56.803 { 00:14:56.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.803 "dma_device_type": 2 00:14:56.803 } 00:14:56.803 ], 00:14:56.803 "driver_specific": {} 00:14:56.803 } 00:14:56.803 ] 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.803 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.803 "name": "Existed_Raid", 00:14:56.803 "uuid": "9ca0ed46-2ed2-4c44-947e-f0d65ac577e0", 00:14:56.803 "strip_size_kb": 64, 00:14:56.803 "state": "configuring", 00:14:56.803 "raid_level": "raid0", 00:14:56.803 "superblock": true, 00:14:56.803 "num_base_bdevs": 3, 00:14:56.803 "num_base_bdevs_discovered": 1, 00:14:56.803 "num_base_bdevs_operational": 3, 00:14:56.803 "base_bdevs_list": [ 00:14:56.803 { 00:14:56.803 "name": "BaseBdev1", 00:14:56.803 "uuid": "d0c0cb6b-1dd3-48c5-9c8f-581ffcf38ef7", 00:14:56.803 "is_configured": true, 00:14:56.803 "data_offset": 2048, 00:14:56.803 "data_size": 63488 
00:14:56.803 }, 00:14:56.804 { 00:14:56.804 "name": "BaseBdev2", 00:14:56.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.804 "is_configured": false, 00:14:56.804 "data_offset": 0, 00:14:56.804 "data_size": 0 00:14:56.804 }, 00:14:56.804 { 00:14:56.804 "name": "BaseBdev3", 00:14:56.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.804 "is_configured": false, 00:14:56.804 "data_offset": 0, 00:14:56.804 "data_size": 0 00:14:56.804 } 00:14:56.804 ] 00:14:56.804 }' 00:14:56.804 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.804 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.062 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.062 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.062 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.062 [2024-12-06 13:09:03.542498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.063 [2024-12-06 13:09:03.542576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.063 [2024-12-06 13:09:03.550544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.063 [2024-12-06 
13:09:03.553344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.063 [2024-12-06 13:09:03.553414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.063 [2024-12-06 13:09:03.553432] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:57.063 [2024-12-06 13:09:03.553464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.063 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.322 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.322 "name": "Existed_Raid", 00:14:57.322 "uuid": "023bc295-51b2-4888-9f0a-1b075b06a70e", 00:14:57.322 "strip_size_kb": 64, 00:14:57.322 "state": "configuring", 00:14:57.322 "raid_level": "raid0", 00:14:57.322 "superblock": true, 00:14:57.322 "num_base_bdevs": 3, 00:14:57.322 "num_base_bdevs_discovered": 1, 00:14:57.322 "num_base_bdevs_operational": 3, 00:14:57.322 "base_bdevs_list": [ 00:14:57.322 { 00:14:57.322 "name": "BaseBdev1", 00:14:57.322 "uuid": "d0c0cb6b-1dd3-48c5-9c8f-581ffcf38ef7", 00:14:57.322 "is_configured": true, 00:14:57.322 "data_offset": 2048, 00:14:57.322 "data_size": 63488 00:14:57.322 }, 00:14:57.322 { 00:14:57.322 "name": "BaseBdev2", 00:14:57.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.322 "is_configured": false, 00:14:57.322 "data_offset": 0, 00:14:57.322 "data_size": 0 00:14:57.322 }, 00:14:57.322 { 00:14:57.322 "name": "BaseBdev3", 00:14:57.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.322 "is_configured": false, 00:14:57.322 "data_offset": 0, 00:14:57.322 "data_size": 0 00:14:57.322 } 00:14:57.322 ] 00:14:57.322 }' 00:14:57.322 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.322 13:09:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.582 [2024-12-06 13:09:04.096910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.582 BaseBdev2 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.582 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.841 [ 00:14:57.841 { 00:14:57.841 "name": "BaseBdev2", 00:14:57.841 "aliases": [ 00:14:57.841 "9f1f8265-930b-4ef9-90ce-0f6d3b5c68b4" 00:14:57.841 ], 00:14:57.841 "product_name": "Malloc disk", 00:14:57.841 "block_size": 512, 00:14:57.841 "num_blocks": 65536, 00:14:57.841 "uuid": "9f1f8265-930b-4ef9-90ce-0f6d3b5c68b4", 00:14:57.841 "assigned_rate_limits": { 00:14:57.841 "rw_ios_per_sec": 0, 00:14:57.841 "rw_mbytes_per_sec": 0, 00:14:57.841 "r_mbytes_per_sec": 0, 00:14:57.841 "w_mbytes_per_sec": 0 00:14:57.841 }, 00:14:57.841 "claimed": true, 00:14:57.841 "claim_type": "exclusive_write", 00:14:57.841 "zoned": false, 00:14:57.841 "supported_io_types": { 00:14:57.841 "read": true, 00:14:57.841 "write": true, 00:14:57.841 "unmap": true, 00:14:57.841 "flush": true, 00:14:57.841 "reset": true, 00:14:57.841 "nvme_admin": false, 00:14:57.841 "nvme_io": false, 00:14:57.841 "nvme_io_md": false, 00:14:57.841 "write_zeroes": true, 00:14:57.841 "zcopy": true, 00:14:57.841 "get_zone_info": false, 00:14:57.841 "zone_management": false, 00:14:57.841 "zone_append": false, 00:14:57.841 "compare": false, 00:14:57.841 "compare_and_write": false, 00:14:57.841 "abort": true, 00:14:57.841 "seek_hole": false, 00:14:57.841 "seek_data": false, 00:14:57.841 "copy": true, 00:14:57.841 "nvme_iov_md": false 00:14:57.841 }, 00:14:57.841 "memory_domains": [ 00:14:57.841 { 00:14:57.841 "dma_device_id": "system", 00:14:57.841 "dma_device_type": 1 00:14:57.841 }, 00:14:57.841 { 00:14:57.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.841 "dma_device_type": 2 00:14:57.841 } 00:14:57.841 ], 00:14:57.841 "driver_specific": {} 00:14:57.841 } 00:14:57.841 ] 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.841 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.841 "name": "Existed_Raid", 00:14:57.841 "uuid": "023bc295-51b2-4888-9f0a-1b075b06a70e", 00:14:57.841 "strip_size_kb": 64, 00:14:57.841 "state": "configuring", 00:14:57.841 "raid_level": "raid0", 00:14:57.841 "superblock": true, 00:14:57.841 "num_base_bdevs": 3, 00:14:57.841 "num_base_bdevs_discovered": 2, 00:14:57.841 "num_base_bdevs_operational": 3, 00:14:57.841 "base_bdevs_list": [ 00:14:57.841 { 00:14:57.842 "name": "BaseBdev1", 00:14:57.842 "uuid": "d0c0cb6b-1dd3-48c5-9c8f-581ffcf38ef7", 00:14:57.842 "is_configured": true, 00:14:57.842 "data_offset": 2048, 00:14:57.842 "data_size": 63488 00:14:57.842 }, 00:14:57.842 { 00:14:57.842 "name": "BaseBdev2", 00:14:57.842 "uuid": "9f1f8265-930b-4ef9-90ce-0f6d3b5c68b4", 00:14:57.842 "is_configured": true, 00:14:57.842 "data_offset": 2048, 00:14:57.842 "data_size": 63488 00:14:57.842 }, 00:14:57.842 { 00:14:57.842 "name": "BaseBdev3", 00:14:57.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.842 "is_configured": false, 00:14:57.842 "data_offset": 0, 00:14:57.842 "data_size": 0 00:14:57.842 } 00:14:57.842 ] 00:14:57.842 }' 00:14:57.842 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.842 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.411 [2024-12-06 13:09:04.687422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.411 [2024-12-06 13:09:04.687805] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:58.411 [2024-12-06 13:09:04.687845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.411 [2024-12-06 13:09:04.688223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:58.411 BaseBdev3 00:14:58.411 [2024-12-06 13:09:04.688466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:58.411 [2024-12-06 13:09:04.688494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:58.411 [2024-12-06 13:09:04.688687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.411 [ 00:14:58.411 { 00:14:58.411 "name": "BaseBdev3", 00:14:58.411 "aliases": [ 00:14:58.411 "3b739548-7f93-4ee9-8ddc-e2960c7df338" 00:14:58.411 ], 00:14:58.411 "product_name": "Malloc disk", 00:14:58.411 "block_size": 512, 00:14:58.411 "num_blocks": 65536, 00:14:58.411 "uuid": "3b739548-7f93-4ee9-8ddc-e2960c7df338", 00:14:58.411 "assigned_rate_limits": { 00:14:58.411 "rw_ios_per_sec": 0, 00:14:58.411 "rw_mbytes_per_sec": 0, 00:14:58.411 "r_mbytes_per_sec": 0, 00:14:58.411 "w_mbytes_per_sec": 0 00:14:58.411 }, 00:14:58.411 "claimed": true, 00:14:58.411 "claim_type": "exclusive_write", 00:14:58.411 "zoned": false, 00:14:58.411 "supported_io_types": { 00:14:58.411 "read": true, 00:14:58.411 "write": true, 00:14:58.411 "unmap": true, 00:14:58.411 "flush": true, 00:14:58.411 "reset": true, 00:14:58.411 "nvme_admin": false, 00:14:58.411 "nvme_io": false, 00:14:58.411 "nvme_io_md": false, 00:14:58.411 "write_zeroes": true, 00:14:58.411 "zcopy": true, 00:14:58.411 "get_zone_info": false, 00:14:58.411 "zone_management": false, 00:14:58.411 "zone_append": false, 00:14:58.411 "compare": false, 00:14:58.411 "compare_and_write": false, 00:14:58.411 "abort": true, 00:14:58.411 "seek_hole": false, 00:14:58.411 "seek_data": false, 00:14:58.411 "copy": true, 00:14:58.411 "nvme_iov_md": false 00:14:58.411 }, 00:14:58.411 "memory_domains": [ 00:14:58.411 { 00:14:58.411 "dma_device_id": "system", 00:14:58.411 "dma_device_type": 1 00:14:58.411 }, 00:14:58.411 { 00:14:58.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.411 "dma_device_type": 2 00:14:58.411 } 00:14:58.411 ], 00:14:58.411 "driver_specific": 
{} 00:14:58.411 } 00:14:58.411 ] 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.411 "name": "Existed_Raid", 00:14:58.411 "uuid": "023bc295-51b2-4888-9f0a-1b075b06a70e", 00:14:58.411 "strip_size_kb": 64, 00:14:58.411 "state": "online", 00:14:58.411 "raid_level": "raid0", 00:14:58.411 "superblock": true, 00:14:58.411 "num_base_bdevs": 3, 00:14:58.411 "num_base_bdevs_discovered": 3, 00:14:58.411 "num_base_bdevs_operational": 3, 00:14:58.411 "base_bdevs_list": [ 00:14:58.411 { 00:14:58.411 "name": "BaseBdev1", 00:14:58.411 "uuid": "d0c0cb6b-1dd3-48c5-9c8f-581ffcf38ef7", 00:14:58.411 "is_configured": true, 00:14:58.411 "data_offset": 2048, 00:14:58.411 "data_size": 63488 00:14:58.411 }, 00:14:58.411 { 00:14:58.411 "name": "BaseBdev2", 00:14:58.411 "uuid": "9f1f8265-930b-4ef9-90ce-0f6d3b5c68b4", 00:14:58.411 "is_configured": true, 00:14:58.411 "data_offset": 2048, 00:14:58.411 "data_size": 63488 00:14:58.411 }, 00:14:58.411 { 00:14:58.411 "name": "BaseBdev3", 00:14:58.411 "uuid": "3b739548-7f93-4ee9-8ddc-e2960c7df338", 00:14:58.411 "is_configured": true, 00:14:58.411 "data_offset": 2048, 00:14:58.411 "data_size": 63488 00:14:58.411 } 00:14:58.411 ] 00:14:58.411 }' 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.411 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.991 [2024-12-06 13:09:05.272184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.991 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.991 "name": "Existed_Raid", 00:14:58.991 "aliases": [ 00:14:58.991 "023bc295-51b2-4888-9f0a-1b075b06a70e" 00:14:58.991 ], 00:14:58.991 "product_name": "Raid Volume", 00:14:58.992 "block_size": 512, 00:14:58.992 "num_blocks": 190464, 00:14:58.992 "uuid": "023bc295-51b2-4888-9f0a-1b075b06a70e", 00:14:58.992 "assigned_rate_limits": { 00:14:58.992 "rw_ios_per_sec": 0, 00:14:58.992 "rw_mbytes_per_sec": 0, 00:14:58.992 "r_mbytes_per_sec": 0, 00:14:58.992 "w_mbytes_per_sec": 0 00:14:58.992 }, 00:14:58.992 "claimed": false, 00:14:58.992 "zoned": false, 00:14:58.992 "supported_io_types": { 00:14:58.992 "read": true, 00:14:58.992 "write": true, 00:14:58.992 "unmap": true, 00:14:58.992 "flush": true, 00:14:58.992 "reset": true, 00:14:58.992 "nvme_admin": false, 00:14:58.992 "nvme_io": false, 00:14:58.992 "nvme_io_md": false, 00:14:58.992 
"write_zeroes": true, 00:14:58.992 "zcopy": false, 00:14:58.992 "get_zone_info": false, 00:14:58.992 "zone_management": false, 00:14:58.992 "zone_append": false, 00:14:58.992 "compare": false, 00:14:58.992 "compare_and_write": false, 00:14:58.992 "abort": false, 00:14:58.992 "seek_hole": false, 00:14:58.992 "seek_data": false, 00:14:58.992 "copy": false, 00:14:58.992 "nvme_iov_md": false 00:14:58.992 }, 00:14:58.992 "memory_domains": [ 00:14:58.992 { 00:14:58.992 "dma_device_id": "system", 00:14:58.992 "dma_device_type": 1 00:14:58.992 }, 00:14:58.992 { 00:14:58.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.992 "dma_device_type": 2 00:14:58.992 }, 00:14:58.992 { 00:14:58.992 "dma_device_id": "system", 00:14:58.992 "dma_device_type": 1 00:14:58.992 }, 00:14:58.992 { 00:14:58.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.992 "dma_device_type": 2 00:14:58.992 }, 00:14:58.992 { 00:14:58.992 "dma_device_id": "system", 00:14:58.992 "dma_device_type": 1 00:14:58.992 }, 00:14:58.992 { 00:14:58.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.992 "dma_device_type": 2 00:14:58.992 } 00:14:58.992 ], 00:14:58.992 "driver_specific": { 00:14:58.992 "raid": { 00:14:58.992 "uuid": "023bc295-51b2-4888-9f0a-1b075b06a70e", 00:14:58.992 "strip_size_kb": 64, 00:14:58.992 "state": "online", 00:14:58.992 "raid_level": "raid0", 00:14:58.992 "superblock": true, 00:14:58.992 "num_base_bdevs": 3, 00:14:58.992 "num_base_bdevs_discovered": 3, 00:14:58.992 "num_base_bdevs_operational": 3, 00:14:58.992 "base_bdevs_list": [ 00:14:58.992 { 00:14:58.992 "name": "BaseBdev1", 00:14:58.992 "uuid": "d0c0cb6b-1dd3-48c5-9c8f-581ffcf38ef7", 00:14:58.992 "is_configured": true, 00:14:58.992 "data_offset": 2048, 00:14:58.992 "data_size": 63488 00:14:58.992 }, 00:14:58.992 { 00:14:58.992 "name": "BaseBdev2", 00:14:58.992 "uuid": "9f1f8265-930b-4ef9-90ce-0f6d3b5c68b4", 00:14:58.992 "is_configured": true, 00:14:58.992 "data_offset": 2048, 00:14:58.992 "data_size": 63488 00:14:58.992 }, 
00:14:58.992 { 00:14:58.992 "name": "BaseBdev3", 00:14:58.992 "uuid": "3b739548-7f93-4ee9-8ddc-e2960c7df338", 00:14:58.992 "is_configured": true, 00:14:58.992 "data_offset": 2048, 00:14:58.992 "data_size": 63488 00:14:58.992 } 00:14:58.992 ] 00:14:58.992 } 00:14:58.992 } 00:14:58.992 }' 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:58.992 BaseBdev2 00:14:58.992 BaseBdev3' 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.992 
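Annotation (not part of the original trace): the jq filter above, `'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`, pulls the configured base bdev names out of the raid bdev's `driver_specific` blob, yielding `BaseBdev1 BaseBdev2 BaseBdev3`. A minimal Python equivalent, using a stand-in dict trimmed to just the fields the filter reads:

```python
# Stand-in for the JSON that `rpc_cmd bdev_get_bdevs -b Existed_Raid`
# returned in the trace above, trimmed to the fields the jq filter touches.
raid_bdev_info = {
    "name": "Existed_Raid",
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "BaseBdev1", "is_configured": True},
                {"name": "BaseBdev2", "is_configured": True},
                {"name": "BaseBdev3", "is_configured": True},
            ]
        }
    },
}

# Python rendering of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print("\n".join(base_bdev_names))
```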
13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.992 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.250 [2024-12-06 13:09:05.591966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.250 [2024-12-06 13:09:05.592009] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.250 [2024-12-06 13:09:05.592120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.250 "name": "Existed_Raid", 00:14:59.250 "uuid": "023bc295-51b2-4888-9f0a-1b075b06a70e", 00:14:59.250 "strip_size_kb": 64, 00:14:59.250 "state": "offline", 00:14:59.250 "raid_level": "raid0", 00:14:59.250 "superblock": true, 00:14:59.250 "num_base_bdevs": 3, 00:14:59.250 "num_base_bdevs_discovered": 2, 00:14:59.250 "num_base_bdevs_operational": 2, 00:14:59.250 "base_bdevs_list": [ 00:14:59.250 { 00:14:59.250 "name": null, 00:14:59.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.250 "is_configured": false, 00:14:59.250 "data_offset": 0, 00:14:59.250 "data_size": 63488 00:14:59.250 }, 00:14:59.250 { 00:14:59.250 "name": "BaseBdev2", 00:14:59.250 "uuid": "9f1f8265-930b-4ef9-90ce-0f6d3b5c68b4", 00:14:59.250 "is_configured": true, 00:14:59.250 "data_offset": 2048, 00:14:59.250 "data_size": 63488 00:14:59.250 }, 00:14:59.250 { 00:14:59.250 "name": "BaseBdev3", 00:14:59.250 "uuid": "3b739548-7f93-4ee9-8ddc-e2960c7df338", 
00:14:59.250 "is_configured": true, 00:14:59.250 "data_offset": 2048, 00:14:59.250 "data_size": 63488 00:14:59.250 } 00:14:59.250 ] 00:14:59.250 }' 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.250 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.816 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.816 [2024-12-06 13:09:06.281355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb 
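Annotation (not part of the original trace): the `has_redundancy raid0` call above returns 1, so the test sets `expected_state=offline` before deleting a base bdev, and the subsequent `verify_raid_bdev_state Existed_Raid offline raid0 64 2` confirms it. A hedged sketch of that decision rule (a hypothetical helper mirroring the test's expectation, not SPDK code; which levels count as redundant beyond raid0's non-redundancy is an assumption here):

```python
def expected_state_after_base_bdev_loss(raid_level: str) -> str:
    """Mirror the test's expectation: a non-redundant level (raid0 stripes
    data with no copies or parity) goes offline when a base bdev is removed,
    while a redundant level would stay online."""
    redundant_levels = {"raid1"}  # assumption: raid0 and concat carry no redundancy
    return "online" if raid_level in redundant_levels else "offline"
```

This matches the trace: removing BaseBdev1 from the raid0 array flips `"state"` from `"online"` to `"offline"` and replaces the lost member with a null-named, unconfigured placeholder entry.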
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.075 [2024-12-06 13:09:06.442463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:00.075 [2024-12-06 13:09:06.442561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.075 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.335 BaseBdev2 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.335 13:09:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.335 [ 00:15:00.335 { 00:15:00.335 "name": "BaseBdev2", 00:15:00.335 "aliases": [ 00:15:00.335 "c4f7f7bc-a6b6-4759-a807-682d223c7ec5" 00:15:00.335 ], 00:15:00.335 "product_name": "Malloc disk", 00:15:00.335 "block_size": 512, 00:15:00.335 "num_blocks": 65536, 00:15:00.335 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5", 00:15:00.335 "assigned_rate_limits": { 00:15:00.335 "rw_ios_per_sec": 0, 00:15:00.335 "rw_mbytes_per_sec": 0, 00:15:00.335 "r_mbytes_per_sec": 0, 00:15:00.335 "w_mbytes_per_sec": 0 00:15:00.335 }, 00:15:00.335 "claimed": false, 00:15:00.335 "zoned": false, 00:15:00.335 "supported_io_types": { 00:15:00.335 "read": true, 00:15:00.335 "write": true, 00:15:00.335 "unmap": true, 00:15:00.335 "flush": true, 00:15:00.335 "reset": true, 00:15:00.335 "nvme_admin": false, 00:15:00.335 "nvme_io": false, 00:15:00.335 "nvme_io_md": false, 00:15:00.335 "write_zeroes": true, 00:15:00.335 "zcopy": true, 00:15:00.335 "get_zone_info": false, 00:15:00.335 
"zone_management": false, 00:15:00.335 "zone_append": false, 00:15:00.335 "compare": false, 00:15:00.335 "compare_and_write": false, 00:15:00.335 "abort": true, 00:15:00.335 "seek_hole": false, 00:15:00.335 "seek_data": false, 00:15:00.335 "copy": true, 00:15:00.335 "nvme_iov_md": false 00:15:00.335 }, 00:15:00.335 "memory_domains": [ 00:15:00.335 { 00:15:00.335 "dma_device_id": "system", 00:15:00.335 "dma_device_type": 1 00:15:00.335 }, 00:15:00.335 { 00:15:00.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.335 "dma_device_type": 2 00:15:00.335 } 00:15:00.335 ], 00:15:00.335 "driver_specific": {} 00:15:00.335 } 00:15:00.335 ] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.335 BaseBdev3 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.335 [ 00:15:00.335 { 00:15:00.335 "name": "BaseBdev3", 00:15:00.335 "aliases": [ 00:15:00.335 "d6c91dae-035a-4eec-987d-5fb00673d392" 00:15:00.335 ], 00:15:00.335 "product_name": "Malloc disk", 00:15:00.335 "block_size": 512, 00:15:00.335 "num_blocks": 65536, 00:15:00.335 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392", 00:15:00.335 "assigned_rate_limits": { 00:15:00.335 "rw_ios_per_sec": 0, 00:15:00.335 "rw_mbytes_per_sec": 0, 00:15:00.335 "r_mbytes_per_sec": 0, 00:15:00.335 "w_mbytes_per_sec": 0 00:15:00.335 }, 00:15:00.335 "claimed": false, 00:15:00.335 "zoned": false, 00:15:00.335 "supported_io_types": { 00:15:00.335 "read": true, 00:15:00.335 "write": true, 00:15:00.335 "unmap": true, 00:15:00.335 "flush": true, 00:15:00.335 "reset": true, 00:15:00.335 "nvme_admin": false, 00:15:00.335 "nvme_io": false, 00:15:00.335 "nvme_io_md": false, 00:15:00.335 "write_zeroes": true, 00:15:00.335 
"zcopy": true, 00:15:00.335 "get_zone_info": false, 00:15:00.335 "zone_management": false, 00:15:00.335 "zone_append": false, 00:15:00.335 "compare": false, 00:15:00.335 "compare_and_write": false, 00:15:00.335 "abort": true, 00:15:00.335 "seek_hole": false, 00:15:00.335 "seek_data": false, 00:15:00.335 "copy": true, 00:15:00.335 "nvme_iov_md": false 00:15:00.335 }, 00:15:00.335 "memory_domains": [ 00:15:00.335 { 00:15:00.335 "dma_device_id": "system", 00:15:00.335 "dma_device_type": 1 00:15:00.335 }, 00:15:00.335 { 00:15:00.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.335 "dma_device_type": 2 00:15:00.335 } 00:15:00.335 ], 00:15:00.335 "driver_specific": {} 00:15:00.335 } 00:15:00.335 ] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:00.335 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.336 [2024-12-06 13:09:06.743660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.336 [2024-12-06 13:09:06.743717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.336 [2024-12-06 13:09:06.743764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.336 [2024-12-06 13:09:06.746404] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.336 13:09:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.336 "name": "Existed_Raid", 00:15:00.336 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9", 00:15:00.336 "strip_size_kb": 64, 00:15:00.336 "state": "configuring", 00:15:00.336 "raid_level": "raid0", 00:15:00.336 "superblock": true, 00:15:00.336 "num_base_bdevs": 3, 00:15:00.336 "num_base_bdevs_discovered": 2, 00:15:00.336 "num_base_bdevs_operational": 3, 00:15:00.336 "base_bdevs_list": [ 00:15:00.336 { 00:15:00.336 "name": "BaseBdev1", 00:15:00.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.336 "is_configured": false, 00:15:00.336 "data_offset": 0, 00:15:00.336 "data_size": 0 00:15:00.336 }, 00:15:00.336 { 00:15:00.336 "name": "BaseBdev2", 00:15:00.336 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5", 00:15:00.336 "is_configured": true, 00:15:00.336 "data_offset": 2048, 00:15:00.336 "data_size": 63488 00:15:00.336 }, 00:15:00.336 { 00:15:00.336 "name": "BaseBdev3", 00:15:00.336 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392", 00:15:00.336 "is_configured": true, 00:15:00.336 "data_offset": 2048, 00:15:00.336 "data_size": 63488 00:15:00.336 } 00:15:00.336 ] 00:15:00.336 }' 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.336 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.901 [2024-12-06 13:09:07.275929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.901 13:09:07 
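Annotation (not part of the original trace): after `bdev_raid_remove_base_bdev BaseBdev2`, the dump above shows Existed_Raid in state `"configuring"` with only one of three base bdevs discovered, and unconfigured slots rendered as null-named placeholders. A small Python check of the invariant the test relies on, run against a trimmed copy of the dumped JSON (illustrative only):

```python
import json

# Trimmed from the raid_bdev_info dump in the trace above
# (aliases, rate limits, and other fields omitted).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

# Consistency check: the discovered count matches the number of configured
# entries, and a raid that is missing base bdevs sits in "configuring"
# rather than "online".
configured = sum(b["is_configured"] for b in raid_bdev_info["base_bdevs_list"])
assert configured == raid_bdev_info["num_base_bdevs_discovered"]
assert raid_bdev_info["state"] == "configuring"
```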
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.901 "name": "Existed_Raid", 00:15:00.901 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9", 00:15:00.901 "strip_size_kb": 64, 
00:15:00.901 "state": "configuring", 00:15:00.901 "raid_level": "raid0", 00:15:00.901 "superblock": true, 00:15:00.901 "num_base_bdevs": 3, 00:15:00.901 "num_base_bdevs_discovered": 1, 00:15:00.901 "num_base_bdevs_operational": 3, 00:15:00.901 "base_bdevs_list": [ 00:15:00.901 { 00:15:00.901 "name": "BaseBdev1", 00:15:00.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.901 "is_configured": false, 00:15:00.901 "data_offset": 0, 00:15:00.901 "data_size": 0 00:15:00.901 }, 00:15:00.901 { 00:15:00.901 "name": null, 00:15:00.901 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5", 00:15:00.901 "is_configured": false, 00:15:00.901 "data_offset": 0, 00:15:00.901 "data_size": 63488 00:15:00.901 }, 00:15:00.901 { 00:15:00.901 "name": "BaseBdev3", 00:15:00.901 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392", 00:15:00.901 "is_configured": true, 00:15:00.901 "data_offset": 2048, 00:15:00.901 "data_size": 63488 00:15:00.901 } 00:15:00.901 ] 00:15:00.901 }' 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.901 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.468 [2024-12-06 13:09:07.879491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:01.468 BaseBdev1
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.468 [
00:15:01.468 {
00:15:01.468 "name": "BaseBdev1",
00:15:01.468 "aliases": [
00:15:01.468 "930ec24b-28d2-4971-a5f1-95b6048bff75"
00:15:01.468 ],
00:15:01.468 "product_name": "Malloc disk",
00:15:01.468 "block_size": 512,
00:15:01.468 "num_blocks": 65536,
00:15:01.468 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:01.468 "assigned_rate_limits": {
00:15:01.468 "rw_ios_per_sec": 0,
00:15:01.468 "rw_mbytes_per_sec": 0,
00:15:01.468 "r_mbytes_per_sec": 0,
00:15:01.468 "w_mbytes_per_sec": 0
00:15:01.468 },
00:15:01.468 "claimed": true,
00:15:01.468 "claim_type": "exclusive_write",
00:15:01.468 "zoned": false,
00:15:01.468 "supported_io_types": {
00:15:01.468 "read": true,
00:15:01.468 "write": true,
00:15:01.468 "unmap": true,
00:15:01.468 "flush": true,
00:15:01.468 "reset": true,
00:15:01.468 "nvme_admin": false,
00:15:01.468 "nvme_io": false,
00:15:01.468 "nvme_io_md": false,
00:15:01.468 "write_zeroes": true,
00:15:01.468 "zcopy": true,
00:15:01.468 "get_zone_info": false,
00:15:01.468 "zone_management": false,
00:15:01.468 "zone_append": false,
00:15:01.468 "compare": false,
00:15:01.468 "compare_and_write": false,
00:15:01.468 "abort": true,
00:15:01.468 "seek_hole": false,
00:15:01.468 "seek_data": false,
00:15:01.468 "copy": true,
00:15:01.468 "nvme_iov_md": false
00:15:01.468 },
00:15:01.468 "memory_domains": [
00:15:01.468 {
00:15:01.468 "dma_device_id": "system",
00:15:01.468 "dma_device_type": 1
00:15:01.468 },
00:15:01.468 {
00:15:01.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:01.468 "dma_device_type": 2
00:15:01.468 }
00:15:01.468 ],
00:15:01.468 "driver_specific": {}
00:15:01.468 }
00:15:01.468 ]
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.468 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:01.468 "name": "Existed_Raid",
00:15:01.468 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9",
00:15:01.468 "strip_size_kb": 64,
00:15:01.469 "state": "configuring",
00:15:01.469 "raid_level": "raid0",
00:15:01.469 "superblock": true,
00:15:01.469 "num_base_bdevs": 3,
00:15:01.469 "num_base_bdevs_discovered": 2,
00:15:01.469 "num_base_bdevs_operational": 3,
00:15:01.469 "base_bdevs_list": [
00:15:01.469 {
00:15:01.469 "name": "BaseBdev1",
00:15:01.469 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:01.469 "is_configured": true,
00:15:01.469 "data_offset": 2048,
00:15:01.469 "data_size": 63488
00:15:01.469 },
00:15:01.469 {
00:15:01.469 "name": null,
00:15:01.469 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5",
00:15:01.469 "is_configured": false,
00:15:01.469 "data_offset": 0,
00:15:01.469 "data_size": 63488
00:15:01.469 },
00:15:01.469 {
00:15:01.469 "name": "BaseBdev3",
00:15:01.469 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392",
00:15:01.469 "is_configured": true,
00:15:01.469 "data_offset": 2048,
00:15:01.469 "data_size": 63488
00:15:01.469 }
00:15:01.469 ]
00:15:01.469 }'
00:15:01.469 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:01.469 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.055 [2024-12-06 13:09:08.483775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.055 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:02.055 "name": "Existed_Raid",
00:15:02.055 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9",
00:15:02.055 "strip_size_kb": 64,
00:15:02.055 "state": "configuring",
00:15:02.055 "raid_level": "raid0",
00:15:02.055 "superblock": true,
00:15:02.055 "num_base_bdevs": 3,
00:15:02.055 "num_base_bdevs_discovered": 1,
00:15:02.055 "num_base_bdevs_operational": 3,
00:15:02.055 "base_bdevs_list": [
00:15:02.056 {
00:15:02.056 "name": "BaseBdev1",
00:15:02.056 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:02.056 "is_configured": true,
00:15:02.056 "data_offset": 2048,
00:15:02.056 "data_size": 63488
00:15:02.056 },
00:15:02.056 {
00:15:02.056 "name": null,
00:15:02.056 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5",
00:15:02.056 "is_configured": false,
00:15:02.056 "data_offset": 0,
00:15:02.056 "data_size": 63488
00:15:02.056 },
00:15:02.056 {
00:15:02.056 "name": null,
00:15:02.056 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392",
00:15:02.056 "is_configured": false,
00:15:02.056 "data_offset": 0,
00:15:02.056 "data_size": 63488
00:15:02.056 }
00:15:02.056 ]
00:15:02.056 }'
00:15:02.056 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:02.056 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.621 [2024-12-06 13:09:09.108231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:02.621 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.622 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.879 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:02.879 "name": "Existed_Raid",
00:15:02.879 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9",
00:15:02.879 "strip_size_kb": 64,
00:15:02.879 "state": "configuring",
00:15:02.879 "raid_level": "raid0",
00:15:02.879 "superblock": true,
00:15:02.879 "num_base_bdevs": 3,
00:15:02.879 "num_base_bdevs_discovered": 2,
00:15:02.879 "num_base_bdevs_operational": 3,
00:15:02.879 "base_bdevs_list": [
00:15:02.879 {
00:15:02.879 "name": "BaseBdev1",
00:15:02.879 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:02.879 "is_configured": true,
00:15:02.879 "data_offset": 2048,
00:15:02.879 "data_size": 63488
00:15:02.879 },
00:15:02.879 {
00:15:02.879 "name": null,
00:15:02.879 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5",
00:15:02.879 "is_configured": false,
00:15:02.879 "data_offset": 0,
00:15:02.879 "data_size": 63488
00:15:02.879 },
00:15:02.879 {
00:15:02.879 "name": "BaseBdev3",
00:15:02.879 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392",
00:15:02.879 "is_configured": true,
00:15:02.879 "data_offset": 2048,
00:15:02.879 "data_size": 63488
00:15:02.879 }
00:15:02.879 ]
00:15:02.879 }'
00:15:02.879 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:02.879 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.137 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.137 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.137 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:03.137 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.395 [2024-12-06 13:09:09.712344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:03.395 "name": "Existed_Raid",
00:15:03.395 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9",
00:15:03.395 "strip_size_kb": 64,
00:15:03.395 "state": "configuring",
00:15:03.395 "raid_level": "raid0",
00:15:03.395 "superblock": true,
00:15:03.395 "num_base_bdevs": 3,
00:15:03.395 "num_base_bdevs_discovered": 1,
00:15:03.395 "num_base_bdevs_operational": 3,
00:15:03.395 "base_bdevs_list": [
00:15:03.395 {
00:15:03.395 "name": null,
00:15:03.395 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:03.395 "is_configured": false,
00:15:03.395 "data_offset": 0,
00:15:03.395 "data_size": 63488
00:15:03.395 },
00:15:03.395 {
00:15:03.395 "name": null,
00:15:03.395 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5",
00:15:03.395 "is_configured": false,
00:15:03.395 "data_offset": 0,
00:15:03.395 "data_size": 63488
00:15:03.395 },
00:15:03.395 {
00:15:03.395 "name": "BaseBdev3",
00:15:03.395 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392",
00:15:03.395 "is_configured": true,
00:15:03.395 "data_offset": 2048,
00:15:03.395 "data_size": 63488
00:15:03.395 }
00:15:03.395 ]
00:15:03.395 }'
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:03.395 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.991 [2024-12-06 13:09:10.411362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:03.991 "name": "Existed_Raid",
00:15:03.991 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9",
00:15:03.991 "strip_size_kb": 64,
00:15:03.991 "state": "configuring",
00:15:03.991 "raid_level": "raid0",
00:15:03.991 "superblock": true,
00:15:03.991 "num_base_bdevs": 3,
00:15:03.991 "num_base_bdevs_discovered": 2,
00:15:03.991 "num_base_bdevs_operational": 3,
00:15:03.991 "base_bdevs_list": [
00:15:03.991 {
00:15:03.991 "name": null,
00:15:03.991 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:03.991 "is_configured": false,
00:15:03.991 "data_offset": 0,
00:15:03.991 "data_size": 63488
00:15:03.991 },
00:15:03.991 {
00:15:03.991 "name": "BaseBdev2",
00:15:03.991 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5",
00:15:03.991 "is_configured": true,
00:15:03.991 "data_offset": 2048,
00:15:03.991 "data_size": 63488
00:15:03.991 },
00:15:03.991 {
00:15:03.991 "name": "BaseBdev3",
00:15:03.991 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392",
00:15:03.991 "is_configured": true,
00:15:03.991 "data_offset": 2048,
00:15:03.991 "data_size": 63488
00:15:03.991 }
00:15:03.991 ]
00:15:03.991 }'
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:03.991 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.556 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.556 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 930ec24b-28d2-4971-a5f1-95b6048bff75
00:15:04.556 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.556 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.556 [2024-12-06 13:09:11.072326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:15:04.556 [2024-12-06 13:09:11.072733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:15:04.556 [2024-12-06 13:09:11.072775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
NewBaseBdev
00:15:04.556 [2024-12-06 13:09:11.073143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:15:04.556 [2024-12-06 13:09:11.073371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:15:04.556 [2024-12-06 13:09:11.073394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:15:04.556 [2024-12-06 13:09:11.073604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:04.556 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.556 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:15:04.556 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:15:04.557 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:04.557 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:04.557 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:04.557 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:04.557 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:04.557 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.557 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.814 [
00:15:04.814 {
00:15:04.814 "name": "NewBaseBdev",
00:15:04.814 "aliases": [
00:15:04.814 "930ec24b-28d2-4971-a5f1-95b6048bff75"
00:15:04.814 ],
00:15:04.814 "product_name": "Malloc disk",
00:15:04.814 "block_size": 512,
00:15:04.814 "num_blocks": 65536,
00:15:04.814 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:04.814 "assigned_rate_limits": {
00:15:04.814 "rw_ios_per_sec": 0,
00:15:04.814 "rw_mbytes_per_sec": 0,
00:15:04.814 "r_mbytes_per_sec": 0,
00:15:04.814 "w_mbytes_per_sec": 0
00:15:04.814 },
00:15:04.814 "claimed": true,
00:15:04.814 "claim_type": "exclusive_write",
00:15:04.814 "zoned": false,
00:15:04.814 "supported_io_types": {
00:15:04.814 "read": true,
00:15:04.814 "write": true,
00:15:04.814 "unmap": true,
00:15:04.814 "flush": true,
00:15:04.814 "reset": true,
00:15:04.814 "nvme_admin": false,
00:15:04.814 "nvme_io": false,
00:15:04.814 "nvme_io_md": false,
00:15:04.814 "write_zeroes": true,
00:15:04.814 "zcopy": true,
00:15:04.814 "get_zone_info": false,
00:15:04.814 "zone_management": false,
00:15:04.814 "zone_append": false,
00:15:04.814 "compare": false,
00:15:04.814 "compare_and_write": false,
00:15:04.814 "abort": true,
00:15:04.814 "seek_hole": false,
00:15:04.814 "seek_data": false,
00:15:04.814 "copy": true,
00:15:04.814 "nvme_iov_md": false
00:15:04.814 },
00:15:04.814 "memory_domains": [
00:15:04.814 {
00:15:04.814 "dma_device_id": "system",
00:15:04.814 "dma_device_type": 1
00:15:04.814 },
00:15:04.814 {
00:15:04.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:04.814 "dma_device_type": 2
00:15:04.814 }
00:15:04.814 ],
00:15:04.814 "driver_specific": {}
00:15:04.814 }
00:15:04.814 ]
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.814 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:04.814 "name": "Existed_Raid",
00:15:04.814 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9",
00:15:04.814 "strip_size_kb": 64,
00:15:04.814 "state": "online",
00:15:04.814 "raid_level": "raid0",
00:15:04.814 "superblock": true,
00:15:04.815 "num_base_bdevs": 3,
00:15:04.815 "num_base_bdevs_discovered": 3,
00:15:04.815 "num_base_bdevs_operational": 3,
00:15:04.815 "base_bdevs_list": [
00:15:04.815 {
00:15:04.815 "name": "NewBaseBdev",
00:15:04.815 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:04.815 "is_configured": true,
00:15:04.815 "data_offset": 2048,
00:15:04.815 "data_size": 63488
00:15:04.815 },
00:15:04.815 {
00:15:04.815 "name": "BaseBdev2",
00:15:04.815 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5",
00:15:04.815 "is_configured": true,
00:15:04.815 "data_offset": 2048,
00:15:04.815 "data_size": 63488
00:15:04.815 },
00:15:04.815 {
00:15:04.815 "name": "BaseBdev3",
00:15:04.815 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392",
00:15:04.815 "is_configured": true,
00:15:04.815 "data_offset": 2048,
00:15:04.815 "data_size": 63488
00:15:04.815 }
00:15:04.815 ]
00:15:04.815 }'
00:15:04.815 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:04.815 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.381 [2024-12-06 13:09:11.613043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.381 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:05.381 "name": "Existed_Raid",
00:15:05.381 "aliases": [
00:15:05.381 "61d4d6c8-c253-42e8-8334-7f14938aa9f9"
00:15:05.381 ],
00:15:05.381 "product_name": "Raid Volume",
00:15:05.381 "block_size": 512,
00:15:05.381 "num_blocks": 190464,
00:15:05.381 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9",
00:15:05.381 "assigned_rate_limits": {
00:15:05.381 "rw_ios_per_sec": 0,
00:15:05.381 "rw_mbytes_per_sec": 0,
00:15:05.381 "r_mbytes_per_sec": 0,
00:15:05.381 "w_mbytes_per_sec": 0
00:15:05.381 },
00:15:05.381 "claimed": false,
00:15:05.381 "zoned": false,
00:15:05.381 "supported_io_types": {
00:15:05.381 "read": true,
00:15:05.381 "write": true,
00:15:05.381 "unmap": true,
00:15:05.381 "flush": true,
00:15:05.381 "reset": true,
00:15:05.381 "nvme_admin": false,
00:15:05.381 "nvme_io": false,
00:15:05.381 "nvme_io_md": false,
00:15:05.381 "write_zeroes": true,
00:15:05.381 "zcopy": false,
00:15:05.381 "get_zone_info": false,
00:15:05.381 "zone_management": false,
00:15:05.381 "zone_append": false,
00:15:05.381 "compare": false,
00:15:05.381 "compare_and_write": false,
00:15:05.381 "abort": false,
00:15:05.381 "seek_hole": false,
00:15:05.381 "seek_data": false,
00:15:05.382 "copy": false,
00:15:05.382 "nvme_iov_md": false
00:15:05.382 },
00:15:05.382 "memory_domains": [
00:15:05.382 {
00:15:05.382 "dma_device_id": "system",
00:15:05.382 "dma_device_type": 1
00:15:05.382 },
00:15:05.382 {
00:15:05.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:05.382 "dma_device_type": 2
00:15:05.382 },
00:15:05.382 {
00:15:05.382 "dma_device_id": "system",
00:15:05.382 "dma_device_type": 1
00:15:05.382 },
00:15:05.382 {
00:15:05.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:05.382 "dma_device_type": 2
00:15:05.382 },
00:15:05.382 {
00:15:05.382 "dma_device_id": "system",
00:15:05.382 "dma_device_type": 1
00:15:05.382 },
00:15:05.382 {
00:15:05.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:05.382 "dma_device_type": 2
00:15:05.382 }
00:15:05.382 ],
00:15:05.382 "driver_specific": {
00:15:05.382 "raid": {
00:15:05.382 "uuid": "61d4d6c8-c253-42e8-8334-7f14938aa9f9",
00:15:05.382 "strip_size_kb": 64,
00:15:05.382 "state": "online",
00:15:05.382 "raid_level": "raid0",
00:15:05.382 "superblock": true,
00:15:05.382 "num_base_bdevs": 3,
00:15:05.382 "num_base_bdevs_discovered": 3,
00:15:05.382 "num_base_bdevs_operational": 3,
00:15:05.382 "base_bdevs_list": [
00:15:05.382 {
00:15:05.382 "name": "NewBaseBdev",
00:15:05.382 "uuid": "930ec24b-28d2-4971-a5f1-95b6048bff75",
00:15:05.382 "is_configured": true,
00:15:05.382 "data_offset": 2048,
00:15:05.382 "data_size": 63488
00:15:05.382 },
00:15:05.382 {
00:15:05.382 "name": "BaseBdev2",
00:15:05.382 "uuid": "c4f7f7bc-a6b6-4759-a807-682d223c7ec5",
00:15:05.382 "is_configured": true,
00:15:05.382 "data_offset": 2048,
00:15:05.382 "data_size": 63488
00:15:05.382 },
00:15:05.382 {
00:15:05.382 "name": "BaseBdev3",
00:15:05.382 "uuid": "d6c91dae-035a-4eec-987d-5fb00673d392",
00:15:05.382 "is_configured": true,
00:15:05.382 "data_offset": 2048,
00:15:05.382 "data_size": 63488
00:15:05.382 }
00:15:05.382 ]
00:15:05.382 }
00:15:05.382 }
00:15:05.382 }'
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:15:05.382 BaseBdev2
00:15:05.382 BaseBdev3'
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:05.382 13:09:11
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.382 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.640 [2024-12-06 13:09:11.936689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.640 [2024-12-06 13:09:11.936744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.640 [2024-12-06 13:09:11.936887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.640 [2024-12-06 13:09:11.936961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.640 [2024-12-06 13:09:11.937013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64686 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64686 ']' 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64686 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64686 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.640 killing process with pid 64686 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64686' 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64686 00:15:05.640 [2024-12-06 13:09:11.978693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.640 13:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64686 00:15:05.898 [2024-12-06 13:09:12.242270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.833 13:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:06.833 00:15:06.833 real 0m12.063s 00:15:06.833 user 0m19.956s 00:15:06.833 sys 0m1.689s 00:15:06.833 13:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.833 13:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.833 ************************************ 00:15:06.833 END TEST raid_state_function_test_sb 00:15:06.833 ************************************ 00:15:07.092 13:09:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:07.092 13:09:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:07.092 
13:09:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.092 13:09:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.092 ************************************ 00:15:07.092 START TEST raid_superblock_test 00:15:07.092 ************************************ 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65327 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65327 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65327 ']' 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.092 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.092 [2024-12-06 13:09:13.520162] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:15:07.092 [2024-12-06 13:09:13.520355] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65327 ] 00:15:07.350 [2024-12-06 13:09:13.723825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.609 [2024-12-06 13:09:13.899161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.609 [2024-12-06 13:09:14.132712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.609 [2024-12-06 13:09:14.132779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:08.176 
13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 malloc1 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 [2024-12-06 13:09:14.571919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:08.176 [2024-12-06 13:09:14.572018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.176 [2024-12-06 13:09:14.572072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.176 [2024-12-06 13:09:14.572088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.176 [2024-12-06 13:09:14.575501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.176 pt1 00:15:08.176 [2024-12-06 13:09:14.575708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.176 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.176 malloc2 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.177 [2024-12-06 13:09:14.632883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.177 [2024-12-06 13:09:14.632951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.177 [2024-12-06 13:09:14.632994] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.177 [2024-12-06 13:09:14.633007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.177 [2024-12-06 13:09:14.636575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.177 [2024-12-06 13:09:14.636618] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.177 
pt2 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.177 malloc3 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.177 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.435 [2024-12-06 13:09:14.704499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:08.435 [2024-12-06 13:09:14.704710] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.435 [2024-12-06 13:09:14.704791] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:08.435 [2024-12-06 13:09:14.704913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.435 [2024-12-06 13:09:14.708570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.435 [2024-12-06 13:09:14.708745] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:08.435 pt3 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.435 [2024-12-06 13:09:14.717197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:08.435 [2024-12-06 13:09:14.719971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.435 [2024-12-06 13:09:14.720082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:08.435 [2024-12-06 13:09:14.720313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:08.435 [2024-12-06 13:09:14.720334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:08.435 [2024-12-06 13:09:14.720721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:15:08.435 [2024-12-06 13:09:14.720959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:08.435 [2024-12-06 13:09:14.720974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:08.435 [2024-12-06 13:09:14.721245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.435 "name": "raid_bdev1", 00:15:08.435 "uuid": "a211554f-ee08-4f97-9400-2599b94ae11e", 00:15:08.435 "strip_size_kb": 64, 00:15:08.435 "state": "online", 00:15:08.435 "raid_level": "raid0", 00:15:08.435 "superblock": true, 00:15:08.435 "num_base_bdevs": 3, 00:15:08.435 "num_base_bdevs_discovered": 3, 00:15:08.435 "num_base_bdevs_operational": 3, 00:15:08.435 "base_bdevs_list": [ 00:15:08.435 { 00:15:08.435 "name": "pt1", 00:15:08.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.435 "is_configured": true, 00:15:08.435 "data_offset": 2048, 00:15:08.435 "data_size": 63488 00:15:08.435 }, 00:15:08.435 { 00:15:08.435 "name": "pt2", 00:15:08.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.435 "is_configured": true, 00:15:08.435 "data_offset": 2048, 00:15:08.435 "data_size": 63488 00:15:08.435 }, 00:15:08.435 { 00:15:08.435 "name": "pt3", 00:15:08.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.435 "is_configured": true, 00:15:08.435 "data_offset": 2048, 00:15:08.435 "data_size": 63488 00:15:08.435 } 00:15:08.435 ] 00:15:08.435 }' 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.435 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.002 [2024-12-06 13:09:15.257914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.002 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.002 "name": "raid_bdev1", 00:15:09.002 "aliases": [ 00:15:09.002 "a211554f-ee08-4f97-9400-2599b94ae11e" 00:15:09.002 ], 00:15:09.002 "product_name": "Raid Volume", 00:15:09.002 "block_size": 512, 00:15:09.002 "num_blocks": 190464, 00:15:09.002 "uuid": "a211554f-ee08-4f97-9400-2599b94ae11e", 00:15:09.002 "assigned_rate_limits": { 00:15:09.002 "rw_ios_per_sec": 0, 00:15:09.002 "rw_mbytes_per_sec": 0, 00:15:09.002 "r_mbytes_per_sec": 0, 00:15:09.002 "w_mbytes_per_sec": 0 00:15:09.002 }, 00:15:09.002 "claimed": false, 00:15:09.002 "zoned": false, 00:15:09.002 "supported_io_types": { 00:15:09.002 "read": true, 00:15:09.002 "write": true, 00:15:09.003 "unmap": true, 00:15:09.003 "flush": true, 00:15:09.003 "reset": true, 00:15:09.003 "nvme_admin": false, 00:15:09.003 "nvme_io": false, 00:15:09.003 "nvme_io_md": false, 00:15:09.003 "write_zeroes": true, 00:15:09.003 "zcopy": false, 00:15:09.003 "get_zone_info": false, 00:15:09.003 "zone_management": false, 00:15:09.003 "zone_append": false, 00:15:09.003 "compare": 
false, 00:15:09.003 "compare_and_write": false, 00:15:09.003 "abort": false, 00:15:09.003 "seek_hole": false, 00:15:09.003 "seek_data": false, 00:15:09.003 "copy": false, 00:15:09.003 "nvme_iov_md": false 00:15:09.003 }, 00:15:09.003 "memory_domains": [ 00:15:09.003 { 00:15:09.003 "dma_device_id": "system", 00:15:09.003 "dma_device_type": 1 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.003 "dma_device_type": 2 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "dma_device_id": "system", 00:15:09.003 "dma_device_type": 1 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.003 "dma_device_type": 2 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "dma_device_id": "system", 00:15:09.003 "dma_device_type": 1 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.003 "dma_device_type": 2 00:15:09.003 } 00:15:09.003 ], 00:15:09.003 "driver_specific": { 00:15:09.003 "raid": { 00:15:09.003 "uuid": "a211554f-ee08-4f97-9400-2599b94ae11e", 00:15:09.003 "strip_size_kb": 64, 00:15:09.003 "state": "online", 00:15:09.003 "raid_level": "raid0", 00:15:09.003 "superblock": true, 00:15:09.003 "num_base_bdevs": 3, 00:15:09.003 "num_base_bdevs_discovered": 3, 00:15:09.003 "num_base_bdevs_operational": 3, 00:15:09.003 "base_bdevs_list": [ 00:15:09.003 { 00:15:09.003 "name": "pt1", 00:15:09.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 2048, 00:15:09.003 "data_size": 63488 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "name": "pt2", 00:15:09.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 2048, 00:15:09.003 "data_size": 63488 00:15:09.003 }, 00:15:09.003 { 00:15:09.003 "name": "pt3", 00:15:09.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.003 "is_configured": true, 00:15:09.003 "data_offset": 2048, 00:15:09.003 "data_size": 
63488 00:15:09.003 } 00:15:09.003 ] 00:15:09.003 } 00:15:09.003 } 00:15:09.003 }' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:09.003 pt2 00:15:09.003 pt3' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.003 
13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.003 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.262 [2024-12-06 13:09:15.597943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a211554f-ee08-4f97-9400-2599b94ae11e 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a211554f-ee08-4f97-9400-2599b94ae11e ']' 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.262 [2024-12-06 13:09:15.649503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.262 [2024-12-06 13:09:15.649556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.262 [2024-12-06 13:09:15.649677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.262 [2024-12-06 13:09:15.649771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.262 [2024-12-06 13:09:15.649788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:09.262 13:09:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:09.262 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.522 [2024-12-06 13:09:15.793677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:09.522 [2024-12-06 13:09:15.796693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:09.522 [2024-12-06 13:09:15.796892] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:09.522 [2024-12-06 13:09:15.797133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:09.522 [2024-12-06 13:09:15.797343] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:09.522 [2024-12-06 13:09:15.797634] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:09.522 [2024-12-06 13:09:15.797787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.522 [2024-12-06 13:09:15.797946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:09.522 request: 00:15:09.522 { 00:15:09.522 "name": "raid_bdev1", 00:15:09.522 "raid_level": "raid0", 00:15:09.522 "base_bdevs": [ 00:15:09.522 "malloc1", 00:15:09.522 "malloc2", 00:15:09.522 "malloc3" 00:15:09.522 ], 00:15:09.522 "strip_size_kb": 64, 00:15:09.522 "superblock": false, 00:15:09.522 "method": "bdev_raid_create", 00:15:09.522 "req_id": 1 00:15:09.522 } 00:15:09.522 Got JSON-RPC error response 00:15:09.522 response: 00:15:09.522 { 00:15:09.522 "code": -17, 00:15:09.522 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:09.522 } 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.522 [2024-12-06 13:09:15.862366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:09.522 [2024-12-06 13:09:15.862468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.522 [2024-12-06 13:09:15.862506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:09.522 [2024-12-06 13:09:15.862522] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.522 [2024-12-06 13:09:15.865903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.522 [2024-12-06 13:09:15.865961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:09.522 [2024-12-06 13:09:15.866080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:09.522 [2024-12-06 13:09:15.866179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:15:09.522 pt1 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.522 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.523 "name": "raid_bdev1", 00:15:09.523 "uuid": "a211554f-ee08-4f97-9400-2599b94ae11e", 00:15:09.523 
"strip_size_kb": 64, 00:15:09.523 "state": "configuring", 00:15:09.523 "raid_level": "raid0", 00:15:09.523 "superblock": true, 00:15:09.523 "num_base_bdevs": 3, 00:15:09.523 "num_base_bdevs_discovered": 1, 00:15:09.523 "num_base_bdevs_operational": 3, 00:15:09.523 "base_bdevs_list": [ 00:15:09.523 { 00:15:09.523 "name": "pt1", 00:15:09.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.523 "is_configured": true, 00:15:09.523 "data_offset": 2048, 00:15:09.523 "data_size": 63488 00:15:09.523 }, 00:15:09.523 { 00:15:09.523 "name": null, 00:15:09.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.523 "is_configured": false, 00:15:09.523 "data_offset": 2048, 00:15:09.523 "data_size": 63488 00:15:09.523 }, 00:15:09.523 { 00:15:09.523 "name": null, 00:15:09.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.523 "is_configured": false, 00:15:09.523 "data_offset": 2048, 00:15:09.523 "data_size": 63488 00:15:09.523 } 00:15:09.523 ] 00:15:09.523 }' 00:15:09.523 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.523 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.091 [2024-12-06 13:09:16.398707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.091 [2024-12-06 13:09:16.398982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.091 [2024-12-06 13:09:16.399031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:15:10.091 [2024-12-06 13:09:16.399048] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.091 [2024-12-06 13:09:16.399774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.091 [2024-12-06 13:09:16.399822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.091 [2024-12-06 13:09:16.400015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.091 [2024-12-06 13:09:16.400064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.091 pt2 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.091 [2024-12-06 13:09:16.406603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.091 13:09:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.091 "name": "raid_bdev1", 00:15:10.091 "uuid": "a211554f-ee08-4f97-9400-2599b94ae11e", 00:15:10.091 "strip_size_kb": 64, 00:15:10.091 "state": "configuring", 00:15:10.091 "raid_level": "raid0", 00:15:10.091 "superblock": true, 00:15:10.091 "num_base_bdevs": 3, 00:15:10.091 "num_base_bdevs_discovered": 1, 00:15:10.091 "num_base_bdevs_operational": 3, 00:15:10.091 "base_bdevs_list": [ 00:15:10.091 { 00:15:10.091 "name": "pt1", 00:15:10.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.091 "is_configured": true, 00:15:10.091 "data_offset": 2048, 00:15:10.091 "data_size": 63488 00:15:10.091 }, 00:15:10.091 { 00:15:10.091 "name": null, 00:15:10.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.091 "is_configured": false, 00:15:10.091 "data_offset": 0, 00:15:10.091 "data_size": 63488 00:15:10.091 }, 00:15:10.091 { 00:15:10.091 "name": null, 00:15:10.091 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.091 
"is_configured": false, 00:15:10.091 "data_offset": 2048, 00:15:10.091 "data_size": 63488 00:15:10.091 } 00:15:10.091 ] 00:15:10.091 }' 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.091 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.660 [2024-12-06 13:09:16.950823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.660 [2024-12-06 13:09:16.950918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.660 [2024-12-06 13:09:16.950948] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:10.660 [2024-12-06 13:09:16.950966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.660 [2024-12-06 13:09:16.951664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.660 [2024-12-06 13:09:16.951697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.660 [2024-12-06 13:09:16.951813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.660 [2024-12-06 13:09:16.951883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.660 pt2 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.660 [2024-12-06 13:09:16.958729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.660 [2024-12-06 13:09:16.958815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.660 [2024-12-06 13:09:16.958851] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:10.660 [2024-12-06 13:09:16.958867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.660 [2024-12-06 13:09:16.959358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.660 [2024-12-06 13:09:16.959399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.660 [2024-12-06 13:09:16.959494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:10.660 [2024-12-06 13:09:16.959529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.660 [2024-12-06 13:09:16.959681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:10.660 [2024-12-06 13:09:16.959709] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:10.660 [2024-12-06 13:09:16.960060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.660 [2024-12-06 13:09:16.960271] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:10.660 [2024-12-06 13:09:16.960285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:10.660 [2024-12-06 13:09:16.960479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.660 pt3 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.660 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.660 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.660 "name": "raid_bdev1", 00:15:10.660 "uuid": "a211554f-ee08-4f97-9400-2599b94ae11e", 00:15:10.660 "strip_size_kb": 64, 00:15:10.660 "state": "online", 00:15:10.660 "raid_level": "raid0", 00:15:10.660 "superblock": true, 00:15:10.660 "num_base_bdevs": 3, 00:15:10.660 "num_base_bdevs_discovered": 3, 00:15:10.660 "num_base_bdevs_operational": 3, 00:15:10.660 "base_bdevs_list": [ 00:15:10.660 { 00:15:10.660 "name": "pt1", 00:15:10.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.660 "is_configured": true, 00:15:10.660 "data_offset": 2048, 00:15:10.660 "data_size": 63488 00:15:10.660 }, 00:15:10.660 { 00:15:10.660 "name": "pt2", 00:15:10.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.660 "is_configured": true, 00:15:10.660 "data_offset": 2048, 00:15:10.660 "data_size": 63488 00:15:10.660 }, 00:15:10.660 { 00:15:10.660 "name": "pt3", 00:15:10.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.660 "is_configured": true, 00:15:10.660 "data_offset": 2048, 00:15:10.660 "data_size": 63488 00:15:10.660 } 00:15:10.660 ] 00:15:10.660 }' 00:15:10.660 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.660 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:11.266 13:09:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.266 [2024-12-06 13:09:17.495417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.266 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.266 "name": "raid_bdev1", 00:15:11.266 "aliases": [ 00:15:11.266 "a211554f-ee08-4f97-9400-2599b94ae11e" 00:15:11.266 ], 00:15:11.266 "product_name": "Raid Volume", 00:15:11.266 "block_size": 512, 00:15:11.266 "num_blocks": 190464, 00:15:11.266 "uuid": "a211554f-ee08-4f97-9400-2599b94ae11e", 00:15:11.266 "assigned_rate_limits": { 00:15:11.266 "rw_ios_per_sec": 0, 00:15:11.266 "rw_mbytes_per_sec": 0, 00:15:11.266 "r_mbytes_per_sec": 0, 00:15:11.266 "w_mbytes_per_sec": 0 00:15:11.266 }, 00:15:11.266 "claimed": false, 00:15:11.266 "zoned": false, 00:15:11.266 "supported_io_types": { 00:15:11.266 "read": true, 00:15:11.266 "write": true, 00:15:11.266 "unmap": true, 00:15:11.266 "flush": true, 00:15:11.266 "reset": true, 00:15:11.266 "nvme_admin": false, 00:15:11.266 "nvme_io": false, 00:15:11.266 "nvme_io_md": false, 00:15:11.266 
"write_zeroes": true, 00:15:11.266 "zcopy": false, 00:15:11.266 "get_zone_info": false, 00:15:11.266 "zone_management": false, 00:15:11.266 "zone_append": false, 00:15:11.266 "compare": false, 00:15:11.266 "compare_and_write": false, 00:15:11.266 "abort": false, 00:15:11.266 "seek_hole": false, 00:15:11.266 "seek_data": false, 00:15:11.266 "copy": false, 00:15:11.266 "nvme_iov_md": false 00:15:11.266 }, 00:15:11.266 "memory_domains": [ 00:15:11.266 { 00:15:11.266 "dma_device_id": "system", 00:15:11.266 "dma_device_type": 1 00:15:11.266 }, 00:15:11.266 { 00:15:11.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.266 "dma_device_type": 2 00:15:11.266 }, 00:15:11.266 { 00:15:11.266 "dma_device_id": "system", 00:15:11.266 "dma_device_type": 1 00:15:11.266 }, 00:15:11.266 { 00:15:11.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.266 "dma_device_type": 2 00:15:11.266 }, 00:15:11.266 { 00:15:11.266 "dma_device_id": "system", 00:15:11.266 "dma_device_type": 1 00:15:11.266 }, 00:15:11.266 { 00:15:11.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.266 "dma_device_type": 2 00:15:11.266 } 00:15:11.266 ], 00:15:11.266 "driver_specific": { 00:15:11.266 "raid": { 00:15:11.266 "uuid": "a211554f-ee08-4f97-9400-2599b94ae11e", 00:15:11.266 "strip_size_kb": 64, 00:15:11.266 "state": "online", 00:15:11.266 "raid_level": "raid0", 00:15:11.266 "superblock": true, 00:15:11.266 "num_base_bdevs": 3, 00:15:11.266 "num_base_bdevs_discovered": 3, 00:15:11.266 "num_base_bdevs_operational": 3, 00:15:11.266 "base_bdevs_list": [ 00:15:11.266 { 00:15:11.266 "name": "pt1", 00:15:11.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.266 "is_configured": true, 00:15:11.266 "data_offset": 2048, 00:15:11.266 "data_size": 63488 00:15:11.266 }, 00:15:11.266 { 00:15:11.266 "name": "pt2", 00:15:11.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.266 "is_configured": true, 00:15:11.266 "data_offset": 2048, 00:15:11.266 "data_size": 63488 00:15:11.266 }, 00:15:11.266 
{ 00:15:11.266 "name": "pt3", 00:15:11.266 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.266 "is_configured": true, 00:15:11.266 "data_offset": 2048, 00:15:11.266 "data_size": 63488 00:15:11.266 } 00:15:11.266 ] 00:15:11.267 } 00:15:11.267 } 00:15:11.267 }' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:11.267 pt2 00:15:11.267 pt3' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:11.267 13:09:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.267 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:11.525 
[2024-12-06 13:09:17.835483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a211554f-ee08-4f97-9400-2599b94ae11e '!=' a211554f-ee08-4f97-9400-2599b94ae11e ']' 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65327 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65327 ']' 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65327 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65327 00:15:11.525 killing process with pid 65327 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65327' 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65327 00:15:11.525 [2024-12-06 13:09:17.921401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:11.525 13:09:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65327 00:15:11.525 [2024-12-06 13:09:17.921610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.525 [2024-12-06 13:09:17.921702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.525 [2024-12-06 13:09:17.921724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:11.785 [2024-12-06 13:09:18.201845] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.162 13:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:13.162 00:15:13.162 real 0m5.912s 00:15:13.162 user 0m8.787s 00:15:13.162 sys 0m0.970s 00:15:13.162 13:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.162 13:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.162 ************************************ 00:15:13.162 END TEST raid_superblock_test 00:15:13.162 ************************************ 00:15:13.162 13:09:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:15:13.162 13:09:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:13.162 13:09:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.162 13:09:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.162 ************************************ 00:15:13.162 START TEST raid_read_error_test 00:15:13.162 ************************************ 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:13.162 13:09:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9ZrTnROrWe 00:15:13.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65581 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65581 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65581 ']' 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.162 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.163 [2024-12-06 13:09:19.513299] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:15:13.163 [2024-12-06 13:09:19.513903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65581 ] 00:15:13.422 [2024-12-06 13:09:19.704673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.422 [2024-12-06 13:09:19.854580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.680 [2024-12-06 13:09:20.081575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.680 [2024-12-06 13:09:20.081664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.247 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.247 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:14.247 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.247 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.247 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.247 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.247 BaseBdev1_malloc 00:15:14.247 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 true 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 [2024-12-06 13:09:20.595076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:14.248 [2024-12-06 13:09:20.595320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.248 [2024-12-06 13:09:20.595364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:14.248 [2024-12-06 13:09:20.595385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.248 [2024-12-06 13:09:20.598537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.248 [2024-12-06 13:09:20.598589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.248 BaseBdev1 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 BaseBdev2_malloc 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 true 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 [2024-12-06 13:09:20.659269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:14.248 [2024-12-06 13:09:20.659514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.248 [2024-12-06 13:09:20.659564] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:14.248 [2024-12-06 13:09:20.659584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.248 [2024-12-06 13:09:20.662625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.248 [2024-12-06 13:09:20.662675] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:14.248 BaseBdev2 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 BaseBdev3_malloc 00:15:14.248 13:09:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 true 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 [2024-12-06 13:09:20.728688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:14.248 [2024-12-06 13:09:20.728767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.248 [2024-12-06 13:09:20.728830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:14.248 [2024-12-06 13:09:20.728849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.248 [2024-12-06 13:09:20.732005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.248 [2024-12-06 13:09:20.732068] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:14.248 BaseBdev3 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 [2024-12-06 13:09:20.736787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.248 [2024-12-06 13:09:20.739494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.248 [2024-12-06 13:09:20.739821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.248 [2024-12-06 13:09:20.740119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:14.248 [2024-12-06 13:09:20.740141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:14.248 [2024-12-06 13:09:20.740574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:14.248 [2024-12-06 13:09:20.740799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:14.248 [2024-12-06 13:09:20.740822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:14.248 [2024-12-06 13:09:20.741098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.248 13:09:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.248 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.507 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.507 "name": "raid_bdev1", 00:15:14.507 "uuid": "b03bc37c-53bf-4f78-910f-375027f442b0", 00:15:14.507 "strip_size_kb": 64, 00:15:14.507 "state": "online", 00:15:14.507 "raid_level": "raid0", 00:15:14.507 "superblock": true, 00:15:14.507 "num_base_bdevs": 3, 00:15:14.507 "num_base_bdevs_discovered": 3, 00:15:14.507 "num_base_bdevs_operational": 3, 00:15:14.507 "base_bdevs_list": [ 00:15:14.507 { 00:15:14.507 "name": "BaseBdev1", 00:15:14.507 "uuid": "622e7565-59f8-5304-889d-6a489f93a224", 00:15:14.507 "is_configured": true, 00:15:14.507 "data_offset": 2048, 00:15:14.507 "data_size": 63488 00:15:14.507 }, 00:15:14.507 { 00:15:14.507 "name": "BaseBdev2", 00:15:14.507 "uuid": "da51a970-f577-5185-af05-577887dfd6ae", 00:15:14.507 "is_configured": true, 00:15:14.507 "data_offset": 2048, 00:15:14.507 "data_size": 63488 
00:15:14.507 }, 00:15:14.507 { 00:15:14.507 "name": "BaseBdev3", 00:15:14.507 "uuid": "c184c550-e681-5dc6-8114-326791fa2de5", 00:15:14.507 "is_configured": true, 00:15:14.507 "data_offset": 2048, 00:15:14.507 "data_size": 63488 00:15:14.507 } 00:15:14.507 ] 00:15:14.507 }' 00:15:14.507 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.507 13:09:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.765 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:14.765 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:15.023 [2024-12-06 13:09:21.394796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:15.958 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.959 "name": "raid_bdev1", 00:15:15.959 "uuid": "b03bc37c-53bf-4f78-910f-375027f442b0", 00:15:15.959 "strip_size_kb": 64, 00:15:15.959 "state": "online", 00:15:15.959 "raid_level": "raid0", 00:15:15.959 "superblock": true, 00:15:15.959 "num_base_bdevs": 3, 00:15:15.959 "num_base_bdevs_discovered": 3, 00:15:15.959 "num_base_bdevs_operational": 3, 00:15:15.959 "base_bdevs_list": [ 00:15:15.959 { 00:15:15.959 "name": "BaseBdev1", 00:15:15.959 "uuid": "622e7565-59f8-5304-889d-6a489f93a224", 00:15:15.959 "is_configured": true, 00:15:15.959 "data_offset": 2048, 00:15:15.959 "data_size": 63488 
00:15:15.959 }, 00:15:15.959 { 00:15:15.959 "name": "BaseBdev2", 00:15:15.959 "uuid": "da51a970-f577-5185-af05-577887dfd6ae", 00:15:15.959 "is_configured": true, 00:15:15.959 "data_offset": 2048, 00:15:15.959 "data_size": 63488 00:15:15.959 }, 00:15:15.959 { 00:15:15.959 "name": "BaseBdev3", 00:15:15.959 "uuid": "c184c550-e681-5dc6-8114-326791fa2de5", 00:15:15.959 "is_configured": true, 00:15:15.959 "data_offset": 2048, 00:15:15.959 "data_size": 63488 00:15:15.959 } 00:15:15.959 ] 00:15:15.959 }' 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.959 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.526 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.526 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.526 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.526 [2024-12-06 13:09:22.815839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.526 [2024-12-06 13:09:22.815926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.526 [2024-12-06 13:09:22.819440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.526 [2024-12-06 13:09:22.819696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.526 [2024-12-06 13:09:22.819773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.526 [2024-12-06 13:09:22.819790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:16.526 { 00:15:16.526 "results": [ 00:15:16.526 { 00:15:16.526 "job": "raid_bdev1", 00:15:16.526 "core_mask": "0x1", 00:15:16.526 "workload": "randrw", 00:15:16.527 "percentage": 50, 
00:15:16.527 "status": "finished", 00:15:16.527 "queue_depth": 1, 00:15:16.527 "io_size": 131072, 00:15:16.527 "runtime": 1.418262, 00:15:16.527 "iops": 9627.981289775797, 00:15:16.527 "mibps": 1203.4976612219746, 00:15:16.527 "io_failed": 1, 00:15:16.527 "io_timeout": 0, 00:15:16.527 "avg_latency_us": 145.88724503381798, 00:15:16.527 "min_latency_us": 40.96, 00:15:16.527 "max_latency_us": 2040.5527272727272 00:15:16.527 } 00:15:16.527 ], 00:15:16.527 "core_count": 1 00:15:16.527 } 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65581 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65581 ']' 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65581 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65581 00:15:16.527 killing process with pid 65581 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65581' 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65581 00:15:16.527 13:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65581 00:15:16.527 [2024-12-06 13:09:22.856463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.785 [2024-12-06 13:09:23.071901] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.158 13:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9ZrTnROrWe 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:18.159 00:15:18.159 real 0m4.896s 00:15:18.159 user 0m6.007s 00:15:18.159 sys 0m0.684s 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.159 ************************************ 00:15:18.159 END TEST raid_read_error_test 00:15:18.159 ************************************ 00:15:18.159 13:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.159 13:09:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:15:18.159 13:09:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:18.159 13:09:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.159 13:09:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.159 ************************************ 00:15:18.159 START TEST raid_write_error_test 00:15:18.159 ************************************ 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:15:18.159 13:09:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:18.159 13:09:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BPuEvg2L4p 00:15:18.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65732 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65732 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65732 ']' 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.159 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.159 [2024-12-06 13:09:24.473293] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:15:18.159 [2024-12-06 13:09:24.473606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65732 ] 00:15:18.159 [2024-12-06 13:09:24.667070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.421 [2024-12-06 13:09:24.829477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.683 [2024-12-06 13:09:25.048513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.683 [2024-12-06 13:09:25.048913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.954 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.954 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:18.954 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.954 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.954 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.954 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.212 BaseBdev1_malloc 00:15:19.212 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.212 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:19.212 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.212 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.212 true 00:15:19.212 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.212 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:19.212 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.212 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.212 [2024-12-06 13:09:25.497626] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:19.212 [2024-12-06 13:09:25.497701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.212 [2024-12-06 13:09:25.497731] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:19.213 [2024-12-06 13:09:25.497748] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.213 [2024-12-06 13:09:25.500764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.213 [2024-12-06 13:09:25.500833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.213 BaseBdev1 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.213 BaseBdev2_malloc 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.213 true 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.213 [2024-12-06 13:09:25.560094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:19.213 [2024-12-06 13:09:25.560166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.213 [2024-12-06 13:09:25.560191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:19.213 [2024-12-06 13:09:25.560208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.213 [2024-12-06 13:09:25.563135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.213 [2024-12-06 13:09:25.563354] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:19.213 BaseBdev2 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:19.213 13:09:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.213 BaseBdev3_malloc 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.213 true 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.213 [2024-12-06 13:09:25.637641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:19.213 [2024-12-06 13:09:25.637728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.213 [2024-12-06 13:09:25.637758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:19.213 [2024-12-06 13:09:25.637776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.213 [2024-12-06 13:09:25.640904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.213 [2024-12-06 13:09:25.641096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:19.213 BaseBdev3 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.213 [2024-12-06 13:09:25.645833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.213 [2024-12-06 13:09:25.648652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.213 [2024-12-06 13:09:25.648767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.213 [2024-12-06 13:09:25.649075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:19.213 [2024-12-06 13:09:25.649094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:19.213 [2024-12-06 13:09:25.649387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:19.213 [2024-12-06 13:09:25.649706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:19.213 [2024-12-06 13:09:25.649730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:19.213 [2024-12-06 13:09:25.650050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.213 "name": "raid_bdev1", 00:15:19.213 "uuid": "e627520f-85bc-4d03-b6e7-18e33f3b81eb", 00:15:19.213 "strip_size_kb": 64, 00:15:19.213 "state": "online", 00:15:19.213 "raid_level": "raid0", 00:15:19.213 "superblock": true, 00:15:19.213 "num_base_bdevs": 3, 00:15:19.213 "num_base_bdevs_discovered": 3, 00:15:19.213 "num_base_bdevs_operational": 3, 00:15:19.213 "base_bdevs_list": [ 00:15:19.213 { 00:15:19.213 "name": "BaseBdev1", 
00:15:19.213 "uuid": "ed8a243c-edc4-5ae9-a7dc-79452225c4e2", 00:15:19.213 "is_configured": true, 00:15:19.213 "data_offset": 2048, 00:15:19.213 "data_size": 63488 00:15:19.213 }, 00:15:19.213 { 00:15:19.213 "name": "BaseBdev2", 00:15:19.213 "uuid": "62f0138d-f4d3-5800-b5e9-1bb6ca218232", 00:15:19.213 "is_configured": true, 00:15:19.213 "data_offset": 2048, 00:15:19.213 "data_size": 63488 00:15:19.213 }, 00:15:19.213 { 00:15:19.213 "name": "BaseBdev3", 00:15:19.213 "uuid": "8bad0ed6-7d25-5675-8dff-7b79faf9c01e", 00:15:19.213 "is_configured": true, 00:15:19.213 "data_offset": 2048, 00:15:19.213 "data_size": 63488 00:15:19.213 } 00:15:19.213 ] 00:15:19.213 }' 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.213 13:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.777 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:19.777 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:19.777 [2024-12-06 13:09:26.287810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.717 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.976 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.976 "name": "raid_bdev1", 00:15:20.976 "uuid": "e627520f-85bc-4d03-b6e7-18e33f3b81eb", 00:15:20.976 "strip_size_kb": 64, 00:15:20.976 "state": "online", 00:15:20.976 
"raid_level": "raid0", 00:15:20.976 "superblock": true, 00:15:20.976 "num_base_bdevs": 3, 00:15:20.976 "num_base_bdevs_discovered": 3, 00:15:20.976 "num_base_bdevs_operational": 3, 00:15:20.976 "base_bdevs_list": [ 00:15:20.976 { 00:15:20.976 "name": "BaseBdev1", 00:15:20.976 "uuid": "ed8a243c-edc4-5ae9-a7dc-79452225c4e2", 00:15:20.976 "is_configured": true, 00:15:20.976 "data_offset": 2048, 00:15:20.976 "data_size": 63488 00:15:20.976 }, 00:15:20.976 { 00:15:20.976 "name": "BaseBdev2", 00:15:20.976 "uuid": "62f0138d-f4d3-5800-b5e9-1bb6ca218232", 00:15:20.976 "is_configured": true, 00:15:20.976 "data_offset": 2048, 00:15:20.976 "data_size": 63488 00:15:20.976 }, 00:15:20.976 { 00:15:20.976 "name": "BaseBdev3", 00:15:20.976 "uuid": "8bad0ed6-7d25-5675-8dff-7b79faf9c01e", 00:15:20.976 "is_configured": true, 00:15:20.976 "data_offset": 2048, 00:15:20.976 "data_size": 63488 00:15:20.976 } 00:15:20.976 ] 00:15:20.976 }' 00:15:20.976 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.976 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.235 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.235 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.235 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.236 [2024-12-06 13:09:27.738639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.236 [2024-12-06 13:09:27.738698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.236 [2024-12-06 13:09:27.743116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.236 { 00:15:21.236 "results": [ 00:15:21.236 { 00:15:21.236 "job": "raid_bdev1", 00:15:21.236 "core_mask": "0x1", 00:15:21.236 "workload": "randrw", 00:15:21.236 "percentage": 
50, 00:15:21.236 "status": "finished", 00:15:21.236 "queue_depth": 1, 00:15:21.236 "io_size": 131072, 00:15:21.236 "runtime": 1.448177, 00:15:21.236 "iops": 9512.64935156407, 00:15:21.236 "mibps": 1189.0811689455088, 00:15:21.236 "io_failed": 1, 00:15:21.236 "io_timeout": 0, 00:15:21.236 "avg_latency_us": 147.41620645740264, 00:15:21.236 "min_latency_us": 30.72, 00:15:21.236 "max_latency_us": 1995.8690909090908 00:15:21.236 } 00:15:21.236 ], 00:15:21.236 "core_count": 1 00:15:21.236 } 00:15:21.236 [2024-12-06 13:09:27.743440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.236 [2024-12-06 13:09:27.743545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.236 [2024-12-06 13:09:27.743567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:21.236 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.236 13:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65732 00:15:21.236 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65732 ']' 00:15:21.236 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65732 00:15:21.236 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:21.236 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.236 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65732 00:15:21.495 killing process with pid 65732 00:15:21.495 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.495 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.495 13:09:27 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 65732' 00:15:21.495 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65732 00:15:21.495 13:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65732 00:15:21.495 [2024-12-06 13:09:27.779620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.495 [2024-12-06 13:09:28.018589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BPuEvg2L4p 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:15:22.906 00:15:22.906 real 0m4.902s 00:15:22.906 user 0m5.970s 00:15:22.906 sys 0m0.659s 00:15:22.906 ************************************ 00:15:22.906 END TEST raid_write_error_test 00:15:22.906 ************************************ 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.906 13:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.906 13:09:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:22.906 13:09:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:22.906 
13:09:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:22.906 13:09:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.906 13:09:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.906 ************************************ 00:15:22.906 START TEST raid_state_function_test 00:15:22.906 ************************************ 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:22.906 Process raid pid: 65877 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65877 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65877' 00:15:22.906 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65877 00:15:22.907 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65877 ']' 
00:15:22.907 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.907 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.907 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.907 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.907 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.907 [2024-12-06 13:09:29.397311] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:15:22.907 [2024-12-06 13:09:29.397539] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.171 [2024-12-06 13:09:29.573124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.439 [2024-12-06 13:09:29.723480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.439 [2024-12-06 13:09:29.953791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.439 [2024-12-06 13:09:29.953847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 
00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.027 [2024-12-06 13:09:30.488352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.027 [2024-12-06 13:09:30.488424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.027 [2024-12-06 13:09:30.488440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.027 [2024-12-06 13:09:30.488486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.027 [2024-12-06 13:09:30.488497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.027 [2024-12-06 13:09:30.488512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.027 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.027 "name": "Existed_Raid", 00:15:24.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.027 "strip_size_kb": 64, 00:15:24.027 "state": "configuring", 00:15:24.027 "raid_level": "concat", 00:15:24.027 "superblock": false, 00:15:24.027 "num_base_bdevs": 3, 00:15:24.027 "num_base_bdevs_discovered": 0, 00:15:24.027 "num_base_bdevs_operational": 3, 00:15:24.027 "base_bdevs_list": [ 00:15:24.027 { 00:15:24.027 "name": "BaseBdev1", 00:15:24.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.027 "is_configured": false, 00:15:24.027 "data_offset": 0, 00:15:24.027 "data_size": 0 00:15:24.027 }, 00:15:24.027 { 00:15:24.027 "name": "BaseBdev2", 00:15:24.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.028 "is_configured": false, 00:15:24.028 "data_offset": 0, 00:15:24.028 "data_size": 0 00:15:24.028 }, 00:15:24.028 { 00:15:24.028 "name": "BaseBdev3", 00:15:24.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.028 "is_configured": false, 00:15:24.028 "data_offset": 0, 00:15:24.028 "data_size": 0 
00:15:24.028 } 00:15:24.028 ] 00:15:24.028 }' 00:15:24.028 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.028 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.626 [2024-12-06 13:09:31.020506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.626 [2024-12-06 13:09:31.020567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.626 [2024-12-06 13:09:31.032481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.626 [2024-12-06 13:09:31.032702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.626 [2024-12-06 13:09:31.032729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.626 [2024-12-06 13:09:31.032749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.626 [2024-12-06 13:09:31.032759] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:24.626 [2024-12-06 13:09:31.032775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.626 [2024-12-06 13:09:31.082556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.626 BaseBdev1 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.626 13:09:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.626 [ 00:15:24.626 { 00:15:24.626 "name": "BaseBdev1", 00:15:24.626 "aliases": [ 00:15:24.626 "557ba091-44f9-4617-8255-0159fea1c026" 00:15:24.626 ], 00:15:24.626 "product_name": "Malloc disk", 00:15:24.626 "block_size": 512, 00:15:24.626 "num_blocks": 65536, 00:15:24.626 "uuid": "557ba091-44f9-4617-8255-0159fea1c026", 00:15:24.626 "assigned_rate_limits": { 00:15:24.626 "rw_ios_per_sec": 0, 00:15:24.626 "rw_mbytes_per_sec": 0, 00:15:24.626 "r_mbytes_per_sec": 0, 00:15:24.626 "w_mbytes_per_sec": 0 00:15:24.626 }, 00:15:24.626 "claimed": true, 00:15:24.626 "claim_type": "exclusive_write", 00:15:24.626 "zoned": false, 00:15:24.626 "supported_io_types": { 00:15:24.626 "read": true, 00:15:24.626 "write": true, 00:15:24.626 "unmap": true, 00:15:24.626 "flush": true, 00:15:24.626 "reset": true, 00:15:24.626 "nvme_admin": false, 00:15:24.626 "nvme_io": false, 00:15:24.626 "nvme_io_md": false, 00:15:24.626 "write_zeroes": true, 00:15:24.626 "zcopy": true, 00:15:24.626 "get_zone_info": false, 00:15:24.626 "zone_management": false, 00:15:24.626 "zone_append": false, 00:15:24.626 "compare": false, 00:15:24.626 "compare_and_write": false, 00:15:24.626 "abort": true, 00:15:24.626 "seek_hole": false, 00:15:24.626 "seek_data": false, 00:15:24.626 "copy": true, 00:15:24.626 "nvme_iov_md": false 00:15:24.626 }, 00:15:24.626 "memory_domains": [ 00:15:24.626 { 00:15:24.626 "dma_device_id": "system", 00:15:24.626 "dma_device_type": 1 00:15:24.626 }, 00:15:24.626 { 00:15:24.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.626 "dma_device_type": 2 00:15:24.626 } 00:15:24.626 ], 00:15:24.626 "driver_specific": {} 00:15:24.626 } 00:15:24.626 ] 
00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.626 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.627 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.627 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.627 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.627 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.889 13:09:31 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.889 "name": "Existed_Raid", 00:15:24.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.889 "strip_size_kb": 64, 00:15:24.889 "state": "configuring", 00:15:24.889 "raid_level": "concat", 00:15:24.889 "superblock": false, 00:15:24.889 "num_base_bdevs": 3, 00:15:24.889 "num_base_bdevs_discovered": 1, 00:15:24.890 "num_base_bdevs_operational": 3, 00:15:24.890 "base_bdevs_list": [ 00:15:24.890 { 00:15:24.890 "name": "BaseBdev1", 00:15:24.890 "uuid": "557ba091-44f9-4617-8255-0159fea1c026", 00:15:24.890 "is_configured": true, 00:15:24.890 "data_offset": 0, 00:15:24.890 "data_size": 65536 00:15:24.890 }, 00:15:24.890 { 00:15:24.890 "name": "BaseBdev2", 00:15:24.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.890 "is_configured": false, 00:15:24.890 "data_offset": 0, 00:15:24.890 "data_size": 0 00:15:24.890 }, 00:15:24.890 { 00:15:24.890 "name": "BaseBdev3", 00:15:24.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.890 "is_configured": false, 00:15:24.890 "data_offset": 0, 00:15:24.890 "data_size": 0 00:15:24.890 } 00:15:24.890 ] 00:15:24.890 }' 00:15:24.890 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.890 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.149 [2024-12-06 13:09:31.634836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.149 [2024-12-06 13:09:31.635067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:25.149 13:09:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.149 [2024-12-06 13:09:31.646868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.149 [2024-12-06 13:09:31.649656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.149 [2024-12-06 13:09:31.649832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.149 [2024-12-06 13:09:31.649961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.149 [2024-12-06 13:09:31.650021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:25.149 13:09:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.149 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.408 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.408 "name": "Existed_Raid", 00:15:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.408 "strip_size_kb": 64, 00:15:25.408 "state": "configuring", 00:15:25.408 "raid_level": "concat", 00:15:25.408 "superblock": false, 00:15:25.408 "num_base_bdevs": 3, 00:15:25.408 "num_base_bdevs_discovered": 1, 00:15:25.408 "num_base_bdevs_operational": 3, 00:15:25.408 "base_bdevs_list": [ 00:15:25.408 { 00:15:25.408 "name": "BaseBdev1", 00:15:25.408 "uuid": "557ba091-44f9-4617-8255-0159fea1c026", 00:15:25.408 "is_configured": true, 00:15:25.408 "data_offset": 0, 00:15:25.408 "data_size": 65536 00:15:25.408 }, 00:15:25.408 { 00:15:25.408 "name": "BaseBdev2", 
00:15:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.408 "is_configured": false, 00:15:25.408 "data_offset": 0, 00:15:25.408 "data_size": 0 00:15:25.408 }, 00:15:25.408 { 00:15:25.408 "name": "BaseBdev3", 00:15:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.408 "is_configured": false, 00:15:25.408 "data_offset": 0, 00:15:25.408 "data_size": 0 00:15:25.408 } 00:15:25.408 ] 00:15:25.408 }' 00:15:25.408 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.408 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.667 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.667 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.667 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 [2024-12-06 13:09:32.221266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.925 BaseBdev2 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_wait_for_examine 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.925 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 [ 00:15:25.925 { 00:15:25.925 "name": "BaseBdev2", 00:15:25.925 "aliases": [ 00:15:25.925 "d61c2ccd-9acd-4485-947a-1bd5e5c76143" 00:15:25.925 ], 00:15:25.925 "product_name": "Malloc disk", 00:15:25.925 "block_size": 512, 00:15:25.925 "num_blocks": 65536, 00:15:25.926 "uuid": "d61c2ccd-9acd-4485-947a-1bd5e5c76143", 00:15:25.926 "assigned_rate_limits": { 00:15:25.926 "rw_ios_per_sec": 0, 00:15:25.926 "rw_mbytes_per_sec": 0, 00:15:25.926 "r_mbytes_per_sec": 0, 00:15:25.926 "w_mbytes_per_sec": 0 00:15:25.926 }, 00:15:25.926 "claimed": true, 00:15:25.926 "claim_type": "exclusive_write", 00:15:25.926 "zoned": false, 00:15:25.926 "supported_io_types": { 00:15:25.926 "read": true, 00:15:25.926 "write": true, 00:15:25.926 "unmap": true, 00:15:25.926 "flush": true, 00:15:25.926 "reset": true, 00:15:25.926 "nvme_admin": false, 00:15:25.926 "nvme_io": false, 00:15:25.926 "nvme_io_md": false, 00:15:25.926 "write_zeroes": true, 00:15:25.926 "zcopy": true, 00:15:25.926 "get_zone_info": false, 00:15:25.926 "zone_management": false, 00:15:25.926 "zone_append": false, 00:15:25.926 "compare": false, 00:15:25.926 "compare_and_write": false, 00:15:25.926 "abort": true, 00:15:25.926 "seek_hole": false, 00:15:25.926 "seek_data": false, 00:15:25.926 "copy": true, 00:15:25.926 "nvme_iov_md": false 
00:15:25.926 }, 00:15:25.926 "memory_domains": [ 00:15:25.926 { 00:15:25.926 "dma_device_id": "system", 00:15:25.926 "dma_device_type": 1 00:15:25.926 }, 00:15:25.926 { 00:15:25.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.926 "dma_device_type": 2 00:15:25.926 } 00:15:25.926 ], 00:15:25.926 "driver_specific": {} 00:15:25.926 } 00:15:25.926 ] 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.926 "name": "Existed_Raid", 00:15:25.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.926 "strip_size_kb": 64, 00:15:25.926 "state": "configuring", 00:15:25.926 "raid_level": "concat", 00:15:25.926 "superblock": false, 00:15:25.926 "num_base_bdevs": 3, 00:15:25.926 "num_base_bdevs_discovered": 2, 00:15:25.926 "num_base_bdevs_operational": 3, 00:15:25.926 "base_bdevs_list": [ 00:15:25.926 { 00:15:25.926 "name": "BaseBdev1", 00:15:25.926 "uuid": "557ba091-44f9-4617-8255-0159fea1c026", 00:15:25.926 "is_configured": true, 00:15:25.926 "data_offset": 0, 00:15:25.926 "data_size": 65536 00:15:25.926 }, 00:15:25.926 { 00:15:25.926 "name": "BaseBdev2", 00:15:25.926 "uuid": "d61c2ccd-9acd-4485-947a-1bd5e5c76143", 00:15:25.926 "is_configured": true, 00:15:25.926 "data_offset": 0, 00:15:25.926 "data_size": 65536 00:15:25.926 }, 00:15:25.926 { 00:15:25.926 "name": "BaseBdev3", 00:15:25.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.926 "is_configured": false, 00:15:25.926 "data_offset": 0, 00:15:25.926 "data_size": 0 00:15:25.926 } 00:15:25.926 ] 00:15:25.926 }' 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.926 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.494 [2024-12-06 13:09:32.830013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.494 [2024-12-06 13:09:32.830263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:26.494 [2024-12-06 13:09:32.830344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:26.494 [2024-12-06 13:09:32.830848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:26.494 [2024-12-06 13:09:32.831502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:26.494 [2024-12-06 13:09:32.831664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:26.494 [2024-12-06 13:09:32.832030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.494 BaseBdev3 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.494 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.494 [ 00:15:26.494 { 00:15:26.494 "name": "BaseBdev3", 00:15:26.494 "aliases": [ 00:15:26.494 "71518757-d576-40d2-82ae-f4bd7adc3263" 00:15:26.494 ], 00:15:26.494 "product_name": "Malloc disk", 00:15:26.494 "block_size": 512, 00:15:26.494 "num_blocks": 65536, 00:15:26.494 "uuid": "71518757-d576-40d2-82ae-f4bd7adc3263", 00:15:26.494 "assigned_rate_limits": { 00:15:26.494 "rw_ios_per_sec": 0, 00:15:26.494 "rw_mbytes_per_sec": 0, 00:15:26.494 "r_mbytes_per_sec": 0, 00:15:26.494 "w_mbytes_per_sec": 0 00:15:26.494 }, 00:15:26.494 "claimed": true, 00:15:26.494 "claim_type": "exclusive_write", 00:15:26.494 "zoned": false, 00:15:26.494 "supported_io_types": { 00:15:26.494 "read": true, 00:15:26.494 "write": true, 00:15:26.494 "unmap": true, 00:15:26.494 "flush": true, 00:15:26.494 "reset": true, 00:15:26.494 "nvme_admin": false, 00:15:26.494 "nvme_io": false, 00:15:26.494 "nvme_io_md": false, 00:15:26.494 "write_zeroes": true, 00:15:26.494 "zcopy": true, 00:15:26.494 "get_zone_info": false, 00:15:26.494 "zone_management": false, 00:15:26.494 "zone_append": false, 00:15:26.494 "compare": false, 00:15:26.494 "compare_and_write": false, 00:15:26.494 "abort": true, 00:15:26.494 "seek_hole": 
false, 00:15:26.494 "seek_data": false, 00:15:26.494 "copy": true, 00:15:26.494 "nvme_iov_md": false 00:15:26.494 }, 00:15:26.494 "memory_domains": [ 00:15:26.494 { 00:15:26.494 "dma_device_id": "system", 00:15:26.494 "dma_device_type": 1 00:15:26.494 }, 00:15:26.495 { 00:15:26.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.495 "dma_device_type": 2 00:15:26.495 } 00:15:26.495 ], 00:15:26.495 "driver_specific": {} 00:15:26.495 } 00:15:26.495 ] 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.495 "name": "Existed_Raid", 00:15:26.495 "uuid": "cd4a8384-2556-4847-9795-c10318869aec", 00:15:26.495 "strip_size_kb": 64, 00:15:26.495 "state": "online", 00:15:26.495 "raid_level": "concat", 00:15:26.495 "superblock": false, 00:15:26.495 "num_base_bdevs": 3, 00:15:26.495 "num_base_bdevs_discovered": 3, 00:15:26.495 "num_base_bdevs_operational": 3, 00:15:26.495 "base_bdevs_list": [ 00:15:26.495 { 00:15:26.495 "name": "BaseBdev1", 00:15:26.495 "uuid": "557ba091-44f9-4617-8255-0159fea1c026", 00:15:26.495 "is_configured": true, 00:15:26.495 "data_offset": 0, 00:15:26.495 "data_size": 65536 00:15:26.495 }, 00:15:26.495 { 00:15:26.495 "name": "BaseBdev2", 00:15:26.495 "uuid": "d61c2ccd-9acd-4485-947a-1bd5e5c76143", 00:15:26.495 "is_configured": true, 00:15:26.495 "data_offset": 0, 00:15:26.495 "data_size": 65536 00:15:26.495 }, 00:15:26.495 { 00:15:26.495 "name": "BaseBdev3", 00:15:26.495 "uuid": "71518757-d576-40d2-82ae-f4bd7adc3263", 00:15:26.495 "is_configured": true, 00:15:26.495 "data_offset": 0, 00:15:26.495 "data_size": 65536 00:15:26.495 } 00:15:26.495 ] 00:15:26.495 }' 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.495 13:09:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.063 [2024-12-06 13:09:33.410668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.063 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.063 "name": "Existed_Raid", 00:15:27.063 "aliases": [ 00:15:27.063 "cd4a8384-2556-4847-9795-c10318869aec" 00:15:27.063 ], 00:15:27.063 "product_name": "Raid Volume", 00:15:27.063 "block_size": 512, 00:15:27.063 "num_blocks": 196608, 00:15:27.063 "uuid": "cd4a8384-2556-4847-9795-c10318869aec", 00:15:27.063 "assigned_rate_limits": { 00:15:27.063 "rw_ios_per_sec": 0, 00:15:27.063 "rw_mbytes_per_sec": 0, 00:15:27.063 "r_mbytes_per_sec": 0, 00:15:27.063 "w_mbytes_per_sec": 0 00:15:27.063 }, 
00:15:27.063 "claimed": false, 00:15:27.063 "zoned": false, 00:15:27.063 "supported_io_types": { 00:15:27.063 "read": true, 00:15:27.063 "write": true, 00:15:27.063 "unmap": true, 00:15:27.063 "flush": true, 00:15:27.063 "reset": true, 00:15:27.063 "nvme_admin": false, 00:15:27.063 "nvme_io": false, 00:15:27.063 "nvme_io_md": false, 00:15:27.063 "write_zeroes": true, 00:15:27.063 "zcopy": false, 00:15:27.063 "get_zone_info": false, 00:15:27.063 "zone_management": false, 00:15:27.063 "zone_append": false, 00:15:27.063 "compare": false, 00:15:27.063 "compare_and_write": false, 00:15:27.063 "abort": false, 00:15:27.063 "seek_hole": false, 00:15:27.063 "seek_data": false, 00:15:27.063 "copy": false, 00:15:27.063 "nvme_iov_md": false 00:15:27.063 }, 00:15:27.063 "memory_domains": [ 00:15:27.063 { 00:15:27.063 "dma_device_id": "system", 00:15:27.063 "dma_device_type": 1 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.063 "dma_device_type": 2 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "dma_device_id": "system", 00:15:27.063 "dma_device_type": 1 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.063 "dma_device_type": 2 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "dma_device_id": "system", 00:15:27.063 "dma_device_type": 1 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.063 "dma_device_type": 2 00:15:27.063 } 00:15:27.063 ], 00:15:27.063 "driver_specific": { 00:15:27.063 "raid": { 00:15:27.063 "uuid": "cd4a8384-2556-4847-9795-c10318869aec", 00:15:27.063 "strip_size_kb": 64, 00:15:27.063 "state": "online", 00:15:27.063 "raid_level": "concat", 00:15:27.063 "superblock": false, 00:15:27.063 "num_base_bdevs": 3, 00:15:27.063 "num_base_bdevs_discovered": 3, 00:15:27.063 "num_base_bdevs_operational": 3, 00:15:27.063 "base_bdevs_list": [ 00:15:27.063 { 00:15:27.063 "name": "BaseBdev1", 00:15:27.063 "uuid": 
"557ba091-44f9-4617-8255-0159fea1c026", 00:15:27.063 "is_configured": true, 00:15:27.063 "data_offset": 0, 00:15:27.063 "data_size": 65536 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "name": "BaseBdev2", 00:15:27.063 "uuid": "d61c2ccd-9acd-4485-947a-1bd5e5c76143", 00:15:27.063 "is_configured": true, 00:15:27.064 "data_offset": 0, 00:15:27.064 "data_size": 65536 00:15:27.064 }, 00:15:27.064 { 00:15:27.064 "name": "BaseBdev3", 00:15:27.064 "uuid": "71518757-d576-40d2-82ae-f4bd7adc3263", 00:15:27.064 "is_configured": true, 00:15:27.064 "data_offset": 0, 00:15:27.064 "data_size": 65536 00:15:27.064 } 00:15:27.064 ] 00:15:27.064 } 00:15:27.064 } 00:15:27.064 }' 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:27.064 BaseBdev2 00:15:27.064 BaseBdev3' 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.064 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.323 
13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.323 
13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.323 [2024-12-06 13:09:33.726372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.323 [2024-12-06 13:09:33.726412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.323 [2024-12-06 13:09:33.726513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:27.323 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.324 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.582 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.582 "name": "Existed_Raid", 00:15:27.582 "uuid": "cd4a8384-2556-4847-9795-c10318869aec", 00:15:27.582 "strip_size_kb": 64, 00:15:27.582 "state": "offline", 00:15:27.582 "raid_level": "concat", 00:15:27.582 "superblock": false, 00:15:27.582 "num_base_bdevs": 3, 00:15:27.582 "num_base_bdevs_discovered": 2, 00:15:27.582 "num_base_bdevs_operational": 2, 00:15:27.582 "base_bdevs_list": [ 00:15:27.582 { 00:15:27.582 "name": null, 00:15:27.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.582 "is_configured": false, 00:15:27.582 "data_offset": 0, 00:15:27.582 "data_size": 65536 00:15:27.582 }, 00:15:27.582 { 00:15:27.582 "name": "BaseBdev2", 00:15:27.582 "uuid": "d61c2ccd-9acd-4485-947a-1bd5e5c76143", 00:15:27.582 
"is_configured": true, 00:15:27.582 "data_offset": 0, 00:15:27.582 "data_size": 65536 00:15:27.582 }, 00:15:27.582 { 00:15:27.582 "name": "BaseBdev3", 00:15:27.582 "uuid": "71518757-d576-40d2-82ae-f4bd7adc3263", 00:15:27.582 "is_configured": true, 00:15:27.582 "data_offset": 0, 00:15:27.582 "data_size": 65536 00:15:27.582 } 00:15:27.582 ] 00:15:27.582 }' 00:15:27.582 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.582 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.840 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:27.840 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.840 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.840 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.840 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.840 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.098 [2024-12-06 13:09:34.405685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.098 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.098 [2024-12-06 13:09:34.552616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.098 [2024-12-06 13:09:34.552855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.442 BaseBdev2 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 
-- # local i 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.442 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.442 [ 00:15:28.442 { 00:15:28.442 "name": "BaseBdev2", 00:15:28.442 "aliases": [ 00:15:28.442 "d7f6e437-3a8e-474d-b35d-8773ff811926" 00:15:28.442 ], 00:15:28.442 "product_name": "Malloc disk", 00:15:28.442 "block_size": 512, 00:15:28.442 "num_blocks": 65536, 00:15:28.442 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:28.442 "assigned_rate_limits": { 00:15:28.442 "rw_ios_per_sec": 0, 00:15:28.442 "rw_mbytes_per_sec": 0, 00:15:28.442 "r_mbytes_per_sec": 0, 00:15:28.442 "w_mbytes_per_sec": 0 00:15:28.442 }, 00:15:28.442 "claimed": false, 00:15:28.442 "zoned": false, 00:15:28.442 "supported_io_types": { 00:15:28.442 "read": true, 00:15:28.442 "write": true, 00:15:28.443 "unmap": true, 00:15:28.443 "flush": true, 00:15:28.443 "reset": true, 00:15:28.443 "nvme_admin": false, 00:15:28.443 "nvme_io": false, 00:15:28.443 "nvme_io_md": false, 00:15:28.443 "write_zeroes": true, 00:15:28.443 "zcopy": true, 00:15:28.443 "get_zone_info": false, 
00:15:28.443 "zone_management": false, 00:15:28.443 "zone_append": false, 00:15:28.443 "compare": false, 00:15:28.443 "compare_and_write": false, 00:15:28.443 "abort": true, 00:15:28.443 "seek_hole": false, 00:15:28.443 "seek_data": false, 00:15:28.443 "copy": true, 00:15:28.443 "nvme_iov_md": false 00:15:28.443 }, 00:15:28.443 "memory_domains": [ 00:15:28.443 { 00:15:28.443 "dma_device_id": "system", 00:15:28.443 "dma_device_type": 1 00:15:28.443 }, 00:15:28.443 { 00:15:28.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.443 "dma_device_type": 2 00:15:28.443 } 00:15:28.443 ], 00:15:28.443 "driver_specific": {} 00:15:28.443 } 00:15:28.443 ] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.443 BaseBdev3 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.443 [ 00:15:28.443 { 00:15:28.443 "name": "BaseBdev3", 00:15:28.443 "aliases": [ 00:15:28.443 "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33" 00:15:28.443 ], 00:15:28.443 "product_name": "Malloc disk", 00:15:28.443 "block_size": 512, 00:15:28.443 "num_blocks": 65536, 00:15:28.443 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:28.443 "assigned_rate_limits": { 00:15:28.443 "rw_ios_per_sec": 0, 00:15:28.443 "rw_mbytes_per_sec": 0, 00:15:28.443 "r_mbytes_per_sec": 0, 00:15:28.443 "w_mbytes_per_sec": 0 00:15:28.443 }, 00:15:28.443 "claimed": false, 00:15:28.443 "zoned": false, 00:15:28.443 "supported_io_types": { 00:15:28.443 "read": true, 00:15:28.443 "write": true, 00:15:28.443 "unmap": true, 00:15:28.443 "flush": true, 00:15:28.443 "reset": true, 00:15:28.443 "nvme_admin": false, 00:15:28.443 "nvme_io": false, 00:15:28.443 "nvme_io_md": false, 00:15:28.443 "write_zeroes": true, 00:15:28.443 "zcopy": true, 00:15:28.443 "get_zone_info": false, 00:15:28.443 
"zone_management": false, 00:15:28.443 "zone_append": false, 00:15:28.443 "compare": false, 00:15:28.443 "compare_and_write": false, 00:15:28.443 "abort": true, 00:15:28.443 "seek_hole": false, 00:15:28.443 "seek_data": false, 00:15:28.443 "copy": true, 00:15:28.443 "nvme_iov_md": false 00:15:28.443 }, 00:15:28.443 "memory_domains": [ 00:15:28.443 { 00:15:28.443 "dma_device_id": "system", 00:15:28.443 "dma_device_type": 1 00:15:28.443 }, 00:15:28.443 { 00:15:28.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.443 "dma_device_type": 2 00:15:28.443 } 00:15:28.443 ], 00:15:28.443 "driver_specific": {} 00:15:28.443 } 00:15:28.443 ] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.443 [2024-12-06 13:09:34.870313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.443 [2024-12-06 13:09:34.870376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.443 [2024-12-06 13:09:34.870411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.443 [2024-12-06 13:09:34.872980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.443 13:09:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.443 "name": "Existed_Raid", 00:15:28.443 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:28.443 "strip_size_kb": 64, 00:15:28.443 "state": "configuring", 00:15:28.443 "raid_level": "concat", 00:15:28.443 "superblock": false, 00:15:28.443 "num_base_bdevs": 3, 00:15:28.443 "num_base_bdevs_discovered": 2, 00:15:28.443 "num_base_bdevs_operational": 3, 00:15:28.443 "base_bdevs_list": [ 00:15:28.443 { 00:15:28.443 "name": "BaseBdev1", 00:15:28.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.443 "is_configured": false, 00:15:28.443 "data_offset": 0, 00:15:28.443 "data_size": 0 00:15:28.443 }, 00:15:28.443 { 00:15:28.443 "name": "BaseBdev2", 00:15:28.443 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:28.443 "is_configured": true, 00:15:28.443 "data_offset": 0, 00:15:28.443 "data_size": 65536 00:15:28.443 }, 00:15:28.443 { 00:15:28.443 "name": "BaseBdev3", 00:15:28.443 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:28.443 "is_configured": true, 00:15:28.443 "data_offset": 0, 00:15:28.443 "data_size": 65536 00:15:28.443 } 00:15:28.443 ] 00:15:28.443 }' 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.443 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.010 [2024-12-06 13:09:35.422551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:29.010 13:09:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.010 "name": "Existed_Raid", 00:15:29.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.010 "strip_size_kb": 64, 00:15:29.010 "state": "configuring", 00:15:29.010 "raid_level": "concat", 00:15:29.010 "superblock": false, 00:15:29.010 "num_base_bdevs": 3, 00:15:29.010 "num_base_bdevs_discovered": 1, 00:15:29.010 
"num_base_bdevs_operational": 3, 00:15:29.010 "base_bdevs_list": [ 00:15:29.010 { 00:15:29.010 "name": "BaseBdev1", 00:15:29.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.010 "is_configured": false, 00:15:29.010 "data_offset": 0, 00:15:29.010 "data_size": 0 00:15:29.010 }, 00:15:29.010 { 00:15:29.010 "name": null, 00:15:29.010 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:29.010 "is_configured": false, 00:15:29.010 "data_offset": 0, 00:15:29.010 "data_size": 65536 00:15:29.010 }, 00:15:29.010 { 00:15:29.010 "name": "BaseBdev3", 00:15:29.010 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:29.010 "is_configured": true, 00:15:29.010 "data_offset": 0, 00:15:29.010 "data_size": 65536 00:15:29.010 } 00:15:29.010 ] 00:15:29.010 }' 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.010 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.576 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.576 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.576 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.576 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.576 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:29.576 [2024-12-06 13:09:36.081987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.576 BaseBdev1 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.576 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.835 [ 00:15:29.835 { 00:15:29.835 "name": "BaseBdev1", 00:15:29.835 "aliases": [ 00:15:29.835 "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977" 00:15:29.835 ], 00:15:29.835 "product_name": "Malloc disk", 00:15:29.835 "block_size": 512, 00:15:29.835 "num_blocks": 65536, 00:15:29.835 
"uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:29.835 "assigned_rate_limits": { 00:15:29.835 "rw_ios_per_sec": 0, 00:15:29.835 "rw_mbytes_per_sec": 0, 00:15:29.835 "r_mbytes_per_sec": 0, 00:15:29.835 "w_mbytes_per_sec": 0 00:15:29.835 }, 00:15:29.835 "claimed": true, 00:15:29.835 "claim_type": "exclusive_write", 00:15:29.835 "zoned": false, 00:15:29.835 "supported_io_types": { 00:15:29.835 "read": true, 00:15:29.835 "write": true, 00:15:29.835 "unmap": true, 00:15:29.835 "flush": true, 00:15:29.835 "reset": true, 00:15:29.835 "nvme_admin": false, 00:15:29.835 "nvme_io": false, 00:15:29.835 "nvme_io_md": false, 00:15:29.835 "write_zeroes": true, 00:15:29.835 "zcopy": true, 00:15:29.835 "get_zone_info": false, 00:15:29.835 "zone_management": false, 00:15:29.835 "zone_append": false, 00:15:29.835 "compare": false, 00:15:29.835 "compare_and_write": false, 00:15:29.835 "abort": true, 00:15:29.835 "seek_hole": false, 00:15:29.835 "seek_data": false, 00:15:29.835 "copy": true, 00:15:29.835 "nvme_iov_md": false 00:15:29.835 }, 00:15:29.835 "memory_domains": [ 00:15:29.835 { 00:15:29.835 "dma_device_id": "system", 00:15:29.835 "dma_device_type": 1 00:15:29.835 }, 00:15:29.835 { 00:15:29.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.835 "dma_device_type": 2 00:15:29.835 } 00:15:29.835 ], 00:15:29.835 "driver_specific": {} 00:15:29.835 } 00:15:29.835 ] 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.835 
13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.835 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.836 "name": "Existed_Raid", 00:15:29.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.836 "strip_size_kb": 64, 00:15:29.836 "state": "configuring", 00:15:29.836 "raid_level": "concat", 00:15:29.836 "superblock": false, 00:15:29.836 "num_base_bdevs": 3, 00:15:29.836 "num_base_bdevs_discovered": 2, 00:15:29.836 "num_base_bdevs_operational": 3, 00:15:29.836 "base_bdevs_list": [ 00:15:29.836 { 00:15:29.836 "name": "BaseBdev1", 00:15:29.836 "uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:29.836 "is_configured": true, 00:15:29.836 
"data_offset": 0, 00:15:29.836 "data_size": 65536 00:15:29.836 }, 00:15:29.836 { 00:15:29.836 "name": null, 00:15:29.836 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:29.836 "is_configured": false, 00:15:29.836 "data_offset": 0, 00:15:29.836 "data_size": 65536 00:15:29.836 }, 00:15:29.836 { 00:15:29.836 "name": "BaseBdev3", 00:15:29.836 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:29.836 "is_configured": true, 00:15:29.836 "data_offset": 0, 00:15:29.836 "data_size": 65536 00:15:29.836 } 00:15:29.836 ] 00:15:29.836 }' 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.836 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.402 [2024-12-06 13:09:36.674226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.402 
13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.402 "name": "Existed_Raid", 00:15:30.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.402 "strip_size_kb": 64, 00:15:30.402 "state": "configuring", 
00:15:30.402 "raid_level": "concat", 00:15:30.402 "superblock": false, 00:15:30.402 "num_base_bdevs": 3, 00:15:30.402 "num_base_bdevs_discovered": 1, 00:15:30.402 "num_base_bdevs_operational": 3, 00:15:30.402 "base_bdevs_list": [ 00:15:30.402 { 00:15:30.402 "name": "BaseBdev1", 00:15:30.402 "uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:30.402 "is_configured": true, 00:15:30.402 "data_offset": 0, 00:15:30.402 "data_size": 65536 00:15:30.402 }, 00:15:30.402 { 00:15:30.402 "name": null, 00:15:30.402 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:30.402 "is_configured": false, 00:15:30.402 "data_offset": 0, 00:15:30.402 "data_size": 65536 00:15:30.402 }, 00:15:30.402 { 00:15:30.402 "name": null, 00:15:30.402 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:30.402 "is_configured": false, 00:15:30.402 "data_offset": 0, 00:15:30.402 "data_size": 65536 00:15:30.402 } 00:15:30.402 ] 00:15:30.402 }' 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.402 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.968 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.968 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.968 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.968 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:30.968 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.968 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:30.968 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:30.969 13:09:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.969 [2024-12-06 13:09:37.250424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.969 "name": "Existed_Raid", 00:15:30.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.969 "strip_size_kb": 64, 00:15:30.969 "state": "configuring", 00:15:30.969 "raid_level": "concat", 00:15:30.969 "superblock": false, 00:15:30.969 "num_base_bdevs": 3, 00:15:30.969 "num_base_bdevs_discovered": 2, 00:15:30.969 "num_base_bdevs_operational": 3, 00:15:30.969 "base_bdevs_list": [ 00:15:30.969 { 00:15:30.969 "name": "BaseBdev1", 00:15:30.969 "uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:30.969 "is_configured": true, 00:15:30.969 "data_offset": 0, 00:15:30.969 "data_size": 65536 00:15:30.969 }, 00:15:30.969 { 00:15:30.969 "name": null, 00:15:30.969 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:30.969 "is_configured": false, 00:15:30.969 "data_offset": 0, 00:15:30.969 "data_size": 65536 00:15:30.969 }, 00:15:30.969 { 00:15:30.969 "name": "BaseBdev3", 00:15:30.969 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:30.969 "is_configured": true, 00:15:30.969 "data_offset": 0, 00:15:30.969 "data_size": 65536 00:15:30.969 } 00:15:30.969 ] 00:15:30.969 }' 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.969 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.574 [2024-12-06 13:09:37.854640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.574 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.574 13:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.575 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.575 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.575 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.575 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.575 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.575 "name": "Existed_Raid", 00:15:31.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.575 "strip_size_kb": 64, 00:15:31.575 "state": "configuring", 00:15:31.575 "raid_level": "concat", 00:15:31.575 "superblock": false, 00:15:31.575 "num_base_bdevs": 3, 00:15:31.575 "num_base_bdevs_discovered": 1, 00:15:31.575 "num_base_bdevs_operational": 3, 00:15:31.575 "base_bdevs_list": [ 00:15:31.575 { 00:15:31.575 "name": null, 00:15:31.575 "uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:31.575 "is_configured": false, 00:15:31.575 "data_offset": 0, 00:15:31.575 "data_size": 65536 00:15:31.575 }, 00:15:31.575 { 00:15:31.575 "name": null, 00:15:31.575 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:31.575 "is_configured": false, 00:15:31.575 "data_offset": 0, 00:15:31.575 "data_size": 65536 00:15:31.575 }, 00:15:31.575 { 00:15:31.575 "name": "BaseBdev3", 00:15:31.575 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:31.575 "is_configured": true, 00:15:31.575 "data_offset": 0, 00:15:31.575 "data_size": 65536 00:15:31.575 } 00:15:31.575 ] 00:15:31.575 }' 00:15:31.575 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.575 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.139 13:09:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.139 [2024-12-06 13:09:38.478475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.139 13:09:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.139 "name": "Existed_Raid", 00:15:32.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.139 "strip_size_kb": 64, 00:15:32.139 "state": "configuring", 00:15:32.139 "raid_level": "concat", 00:15:32.139 "superblock": false, 00:15:32.139 "num_base_bdevs": 3, 00:15:32.139 "num_base_bdevs_discovered": 2, 00:15:32.139 "num_base_bdevs_operational": 3, 00:15:32.139 "base_bdevs_list": [ 00:15:32.139 { 00:15:32.139 "name": null, 00:15:32.139 "uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:32.139 "is_configured": false, 00:15:32.139 "data_offset": 0, 00:15:32.139 "data_size": 65536 00:15:32.139 }, 00:15:32.139 { 00:15:32.139 "name": "BaseBdev2", 00:15:32.139 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:32.139 "is_configured": true, 00:15:32.139 "data_offset": 0, 00:15:32.139 "data_size": 65536 00:15:32.139 }, 00:15:32.139 { 00:15:32.139 "name": "BaseBdev3", 00:15:32.139 "uuid": 
"6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:32.139 "is_configured": true, 00:15:32.139 "data_offset": 0, 00:15:32.139 "data_size": 65536 00:15:32.139 } 00:15:32.139 ] 00:15:32.139 }' 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.139 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.705 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.705 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.705 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.705 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.705 [2024-12-06 13:09:39.131728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:32.705 [2024-12-06 13:09:39.131786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:32.705 [2024-12-06 13:09:39.131801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:32.705 [2024-12-06 13:09:39.132106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:32.705 [2024-12-06 13:09:39.132287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:32.705 [2024-12-06 13:09:39.132302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:32.705 [2024-12-06 13:09:39.132673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.705 NewBaseBdev 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.705 
13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.705 [ 00:15:32.705 { 00:15:32.705 "name": "NewBaseBdev", 00:15:32.705 "aliases": [ 00:15:32.705 "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977" 00:15:32.705 ], 00:15:32.705 "product_name": "Malloc disk", 00:15:32.705 "block_size": 512, 00:15:32.705 "num_blocks": 65536, 00:15:32.705 "uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:32.705 "assigned_rate_limits": { 00:15:32.705 "rw_ios_per_sec": 0, 00:15:32.705 "rw_mbytes_per_sec": 0, 00:15:32.705 "r_mbytes_per_sec": 0, 00:15:32.705 "w_mbytes_per_sec": 0 00:15:32.705 }, 00:15:32.705 "claimed": true, 00:15:32.705 "claim_type": "exclusive_write", 00:15:32.705 "zoned": false, 00:15:32.705 "supported_io_types": { 00:15:32.705 "read": true, 00:15:32.705 "write": true, 00:15:32.705 "unmap": true, 00:15:32.705 "flush": true, 00:15:32.705 "reset": true, 00:15:32.705 "nvme_admin": false, 00:15:32.705 "nvme_io": false, 00:15:32.705 "nvme_io_md": false, 00:15:32.705 "write_zeroes": true, 00:15:32.705 "zcopy": true, 00:15:32.705 "get_zone_info": false, 00:15:32.705 "zone_management": false, 00:15:32.705 "zone_append": false, 00:15:32.705 "compare": false, 00:15:32.705 "compare_and_write": false, 00:15:32.705 "abort": true, 00:15:32.705 "seek_hole": false, 00:15:32.705 "seek_data": false, 00:15:32.705 "copy": true, 00:15:32.705 "nvme_iov_md": false 00:15:32.705 }, 00:15:32.705 "memory_domains": [ 00:15:32.705 { 00:15:32.705 "dma_device_id": "system", 00:15:32.705 "dma_device_type": 1 
00:15:32.705 }, 00:15:32.705 { 00:15:32.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.705 "dma_device_type": 2 00:15:32.705 } 00:15:32.705 ], 00:15:32.705 "driver_specific": {} 00:15:32.705 } 00:15:32.705 ] 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.705 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.974 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.974 "name": "Existed_Raid", 00:15:32.974 "uuid": "cd15ea51-b7c3-4c91-a25b-ba8a55814f50", 00:15:32.974 "strip_size_kb": 64, 00:15:32.974 "state": "online", 00:15:32.974 "raid_level": "concat", 00:15:32.974 "superblock": false, 00:15:32.974 "num_base_bdevs": 3, 00:15:32.974 "num_base_bdevs_discovered": 3, 00:15:32.974 "num_base_bdevs_operational": 3, 00:15:32.974 "base_bdevs_list": [ 00:15:32.974 { 00:15:32.974 "name": "NewBaseBdev", 00:15:32.974 "uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:32.974 "is_configured": true, 00:15:32.974 "data_offset": 0, 00:15:32.974 "data_size": 65536 00:15:32.974 }, 00:15:32.974 { 00:15:32.974 "name": "BaseBdev2", 00:15:32.974 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:32.974 "is_configured": true, 00:15:32.974 "data_offset": 0, 00:15:32.974 "data_size": 65536 00:15:32.974 }, 00:15:32.974 { 00:15:32.974 "name": "BaseBdev3", 00:15:32.974 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:32.974 "is_configured": true, 00:15:32.974 "data_offset": 0, 00:15:32.974 "data_size": 65536 00:15:32.974 } 00:15:32.974 ] 00:15:32.974 }' 00:15:32.974 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.974 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.232 [2024-12-06 13:09:39.708354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.232 "name": "Existed_Raid", 00:15:33.232 "aliases": [ 00:15:33.232 "cd15ea51-b7c3-4c91-a25b-ba8a55814f50" 00:15:33.232 ], 00:15:33.232 "product_name": "Raid Volume", 00:15:33.232 "block_size": 512, 00:15:33.232 "num_blocks": 196608, 00:15:33.232 "uuid": "cd15ea51-b7c3-4c91-a25b-ba8a55814f50", 00:15:33.232 "assigned_rate_limits": { 00:15:33.232 "rw_ios_per_sec": 0, 00:15:33.232 "rw_mbytes_per_sec": 0, 00:15:33.232 "r_mbytes_per_sec": 0, 00:15:33.232 "w_mbytes_per_sec": 0 00:15:33.232 }, 00:15:33.232 "claimed": false, 00:15:33.232 "zoned": false, 00:15:33.232 "supported_io_types": { 00:15:33.232 "read": true, 00:15:33.232 "write": true, 00:15:33.232 "unmap": true, 00:15:33.232 "flush": true, 00:15:33.232 "reset": true, 00:15:33.232 "nvme_admin": false, 00:15:33.232 "nvme_io": false, 00:15:33.232 "nvme_io_md": false, 00:15:33.232 "write_zeroes": true, 00:15:33.232 "zcopy": false, 00:15:33.232 "get_zone_info": false, 00:15:33.232 "zone_management": false, 00:15:33.232 
"zone_append": false, 00:15:33.232 "compare": false, 00:15:33.232 "compare_and_write": false, 00:15:33.232 "abort": false, 00:15:33.232 "seek_hole": false, 00:15:33.232 "seek_data": false, 00:15:33.232 "copy": false, 00:15:33.232 "nvme_iov_md": false 00:15:33.232 }, 00:15:33.232 "memory_domains": [ 00:15:33.232 { 00:15:33.232 "dma_device_id": "system", 00:15:33.232 "dma_device_type": 1 00:15:33.232 }, 00:15:33.232 { 00:15:33.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.232 "dma_device_type": 2 00:15:33.232 }, 00:15:33.232 { 00:15:33.232 "dma_device_id": "system", 00:15:33.232 "dma_device_type": 1 00:15:33.232 }, 00:15:33.232 { 00:15:33.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.232 "dma_device_type": 2 00:15:33.232 }, 00:15:33.232 { 00:15:33.232 "dma_device_id": "system", 00:15:33.232 "dma_device_type": 1 00:15:33.232 }, 00:15:33.232 { 00:15:33.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.232 "dma_device_type": 2 00:15:33.232 } 00:15:33.232 ], 00:15:33.232 "driver_specific": { 00:15:33.232 "raid": { 00:15:33.232 "uuid": "cd15ea51-b7c3-4c91-a25b-ba8a55814f50", 00:15:33.232 "strip_size_kb": 64, 00:15:33.232 "state": "online", 00:15:33.232 "raid_level": "concat", 00:15:33.232 "superblock": false, 00:15:33.232 "num_base_bdevs": 3, 00:15:33.232 "num_base_bdevs_discovered": 3, 00:15:33.232 "num_base_bdevs_operational": 3, 00:15:33.232 "base_bdevs_list": [ 00:15:33.232 { 00:15:33.232 "name": "NewBaseBdev", 00:15:33.232 "uuid": "a2ee1e2b-f5fe-45a9-8ccf-f9bdb15c1977", 00:15:33.232 "is_configured": true, 00:15:33.232 "data_offset": 0, 00:15:33.232 "data_size": 65536 00:15:33.232 }, 00:15:33.232 { 00:15:33.232 "name": "BaseBdev2", 00:15:33.232 "uuid": "d7f6e437-3a8e-474d-b35d-8773ff811926", 00:15:33.232 "is_configured": true, 00:15:33.232 "data_offset": 0, 00:15:33.232 "data_size": 65536 00:15:33.232 }, 00:15:33.232 { 00:15:33.232 "name": "BaseBdev3", 00:15:33.232 "uuid": "6cb7d8ee-5ee6-4bed-86dc-aa63b9a33c33", 00:15:33.232 "is_configured": 
true, 00:15:33.232 "data_offset": 0, 00:15:33.232 "data_size": 65536 00:15:33.232 } 00:15:33.232 ] 00:15:33.232 } 00:15:33.232 } 00:15:33.232 }' 00:15:33.232 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:33.490 BaseBdev2 00:15:33.490 BaseBdev3' 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.490 13:09:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.490 13:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.747 [2024-12-06 13:09:40.032089] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:15:33.747 [2024-12-06 13:09:40.032128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.747 [2024-12-06 13:09:40.032272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.747 [2024-12-06 13:09:40.032358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.747 [2024-12-06 13:09:40.032379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65877 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65877 ']' 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65877 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65877 00:15:33.747 killing process with pid 65877 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65877' 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65877 00:15:33.747 [2024-12-06 13:09:40.070168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:15:33.747 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65877 00:15:34.003 [2024-12-06 13:09:40.359904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:35.418 00:15:35.418 real 0m12.242s 00:15:35.418 user 0m20.139s 00:15:35.418 sys 0m1.744s 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.418 ************************************ 00:15:35.418 END TEST raid_state_function_test 00:15:35.418 ************************************ 00:15:35.418 13:09:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:35.418 13:09:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:35.418 13:09:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.418 13:09:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.418 ************************************ 00:15:35.418 START TEST raid_state_function_test_sb 00:15:35.418 ************************************ 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66515 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66515' 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:35.418 Process raid pid: 66515 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66515 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66515 ']' 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.418 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.418 [2024-12-06 13:09:41.724217] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:15:35.418 [2024-12-06 13:09:41.724779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.418 [2024-12-06 13:09:41.926757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.678 [2024-12-06 13:09:42.120680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.936 [2024-12-06 13:09:42.351034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.936 [2024-12-06 13:09:42.351109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.193 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.193 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:36.193 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:36.193 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.193 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.193 [2024-12-06 13:09:42.716184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.193 [2024-12-06 13:09:42.716262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.193 [2024-12-06 13:09:42.716280] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.193 [2024-12-06 13:09:42.716298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.193 [2024-12-06 13:09:42.716308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:36.193 [2024-12-06 13:09:42.716324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.451 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.451 "name": "Existed_Raid", 00:15:36.451 "uuid": "2274afdb-27c5-4c35-9d5c-287df3660de4", 00:15:36.451 "strip_size_kb": 64, 00:15:36.451 "state": "configuring", 00:15:36.451 "raid_level": "concat", 00:15:36.451 "superblock": true, 00:15:36.451 "num_base_bdevs": 3, 00:15:36.451 "num_base_bdevs_discovered": 0, 00:15:36.451 "num_base_bdevs_operational": 3, 00:15:36.451 "base_bdevs_list": [ 00:15:36.451 { 00:15:36.451 "name": "BaseBdev1", 00:15:36.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.451 "is_configured": false, 00:15:36.451 "data_offset": 0, 00:15:36.451 "data_size": 0 00:15:36.451 }, 00:15:36.451 { 00:15:36.451 "name": "BaseBdev2", 00:15:36.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.451 "is_configured": false, 00:15:36.451 "data_offset": 0, 00:15:36.451 "data_size": 0 00:15:36.451 }, 00:15:36.451 { 00:15:36.451 "name": "BaseBdev3", 00:15:36.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.451 "is_configured": false, 00:15:36.451 "data_offset": 0, 00:15:36.451 "data_size": 0 00:15:36.451 } 00:15:36.451 ] 00:15:36.451 }' 00:15:36.452 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.452 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.710 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:36.710 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.710 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.969 [2024-12-06 13:09:43.240339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:36.969 [2024-12-06 13:09:43.240390] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.969 [2024-12-06 13:09:43.248299] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.969 [2024-12-06 13:09:43.248360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.969 [2024-12-06 13:09:43.248376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.969 [2024-12-06 13:09:43.248392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.969 [2024-12-06 13:09:43.248403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:36.969 [2024-12-06 13:09:43.248418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.969 [2024-12-06 13:09:43.296755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.969 BaseBdev1 
00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.969 [ 00:15:36.969 { 00:15:36.969 "name": "BaseBdev1", 00:15:36.969 "aliases": [ 00:15:36.969 "39eab8d2-b5c4-4fed-a0ac-3af24da2b94f" 00:15:36.969 ], 00:15:36.969 "product_name": "Malloc disk", 00:15:36.969 "block_size": 512, 00:15:36.969 "num_blocks": 65536, 00:15:36.969 "uuid": "39eab8d2-b5c4-4fed-a0ac-3af24da2b94f", 00:15:36.969 "assigned_rate_limits": { 00:15:36.969 
"rw_ios_per_sec": 0, 00:15:36.969 "rw_mbytes_per_sec": 0, 00:15:36.969 "r_mbytes_per_sec": 0, 00:15:36.969 "w_mbytes_per_sec": 0 00:15:36.969 }, 00:15:36.969 "claimed": true, 00:15:36.969 "claim_type": "exclusive_write", 00:15:36.969 "zoned": false, 00:15:36.969 "supported_io_types": { 00:15:36.969 "read": true, 00:15:36.969 "write": true, 00:15:36.969 "unmap": true, 00:15:36.969 "flush": true, 00:15:36.969 "reset": true, 00:15:36.969 "nvme_admin": false, 00:15:36.969 "nvme_io": false, 00:15:36.969 "nvme_io_md": false, 00:15:36.969 "write_zeroes": true, 00:15:36.969 "zcopy": true, 00:15:36.969 "get_zone_info": false, 00:15:36.969 "zone_management": false, 00:15:36.969 "zone_append": false, 00:15:36.969 "compare": false, 00:15:36.969 "compare_and_write": false, 00:15:36.969 "abort": true, 00:15:36.969 "seek_hole": false, 00:15:36.969 "seek_data": false, 00:15:36.969 "copy": true, 00:15:36.969 "nvme_iov_md": false 00:15:36.969 }, 00:15:36.969 "memory_domains": [ 00:15:36.969 { 00:15:36.969 "dma_device_id": "system", 00:15:36.969 "dma_device_type": 1 00:15:36.969 }, 00:15:36.969 { 00:15:36.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.969 "dma_device_type": 2 00:15:36.969 } 00:15:36.969 ], 00:15:36.969 "driver_specific": {} 00:15:36.969 } 00:15:36.969 ] 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.969 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.970 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.970 "name": "Existed_Raid", 00:15:36.970 "uuid": "78c59ec7-c910-4084-9255-7c7ebcc23139", 00:15:36.970 "strip_size_kb": 64, 00:15:36.970 "state": "configuring", 00:15:36.970 "raid_level": "concat", 00:15:36.970 "superblock": true, 00:15:36.970 "num_base_bdevs": 3, 00:15:36.970 "num_base_bdevs_discovered": 1, 00:15:36.970 "num_base_bdevs_operational": 3, 00:15:36.970 "base_bdevs_list": [ 00:15:36.970 { 00:15:36.970 "name": "BaseBdev1", 00:15:36.970 "uuid": "39eab8d2-b5c4-4fed-a0ac-3af24da2b94f", 00:15:36.970 "is_configured": true, 00:15:36.970 "data_offset": 2048, 00:15:36.970 "data_size": 
63488 00:15:36.970 }, 00:15:36.970 { 00:15:36.970 "name": "BaseBdev2", 00:15:36.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.970 "is_configured": false, 00:15:36.970 "data_offset": 0, 00:15:36.970 "data_size": 0 00:15:36.970 }, 00:15:36.970 { 00:15:36.970 "name": "BaseBdev3", 00:15:36.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.970 "is_configured": false, 00:15:36.970 "data_offset": 0, 00:15:36.970 "data_size": 0 00:15:36.970 } 00:15:36.970 ] 00:15:36.970 }' 00:15:36.970 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.970 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.536 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.536 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.536 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.536 [2024-12-06 13:09:43.865007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.536 [2024-12-06 13:09:43.865079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:37.536 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.536 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:37.536 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.536 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.536 [2024-12-06 13:09:43.877058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.536 [2024-12-06 
13:09:43.879839] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.536 [2024-12-06 13:09:43.880074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.536 [2024-12-06 13:09:43.880193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.536 [2024-12-06 13:09:43.880254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.536 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.537 "name": "Existed_Raid", 00:15:37.537 "uuid": "90f408f0-b820-4428-8f30-6f21f7e7c7b2", 00:15:37.537 "strip_size_kb": 64, 00:15:37.537 "state": "configuring", 00:15:37.537 "raid_level": "concat", 00:15:37.537 "superblock": true, 00:15:37.537 "num_base_bdevs": 3, 00:15:37.537 "num_base_bdevs_discovered": 1, 00:15:37.537 "num_base_bdevs_operational": 3, 00:15:37.537 "base_bdevs_list": [ 00:15:37.537 { 00:15:37.537 "name": "BaseBdev1", 00:15:37.537 "uuid": "39eab8d2-b5c4-4fed-a0ac-3af24da2b94f", 00:15:37.537 "is_configured": true, 00:15:37.537 "data_offset": 2048, 00:15:37.537 "data_size": 63488 00:15:37.537 }, 00:15:37.537 { 00:15:37.537 "name": "BaseBdev2", 00:15:37.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.537 "is_configured": false, 00:15:37.537 "data_offset": 0, 00:15:37.537 "data_size": 0 00:15:37.537 }, 00:15:37.537 { 00:15:37.537 "name": "BaseBdev3", 00:15:37.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.537 "is_configured": false, 00:15:37.537 "data_offset": 0, 00:15:37.537 "data_size": 0 00:15:37.537 } 00:15:37.537 ] 00:15:37.537 }' 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.537 13:09:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.104 [2024-12-06 13:09:44.410938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.104 BaseBdev2 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.104 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.104 [ 00:15:38.104 { 00:15:38.104 "name": "BaseBdev2", 00:15:38.104 "aliases": [ 00:15:38.104 "e5f6f931-7483-4def-9a92-20f3f36ea2fe" 00:15:38.104 ], 00:15:38.104 "product_name": "Malloc disk", 00:15:38.104 "block_size": 512, 00:15:38.104 "num_blocks": 65536, 00:15:38.104 "uuid": "e5f6f931-7483-4def-9a92-20f3f36ea2fe", 00:15:38.104 "assigned_rate_limits": { 00:15:38.104 "rw_ios_per_sec": 0, 00:15:38.104 "rw_mbytes_per_sec": 0, 00:15:38.104 "r_mbytes_per_sec": 0, 00:15:38.104 "w_mbytes_per_sec": 0 00:15:38.104 }, 00:15:38.104 "claimed": true, 00:15:38.104 "claim_type": "exclusive_write", 00:15:38.104 "zoned": false, 00:15:38.104 "supported_io_types": { 00:15:38.104 "read": true, 00:15:38.104 "write": true, 00:15:38.104 "unmap": true, 00:15:38.104 "flush": true, 00:15:38.104 "reset": true, 00:15:38.105 "nvme_admin": false, 00:15:38.105 "nvme_io": false, 00:15:38.105 "nvme_io_md": false, 00:15:38.105 "write_zeroes": true, 00:15:38.105 "zcopy": true, 00:15:38.105 "get_zone_info": false, 00:15:38.105 "zone_management": false, 00:15:38.105 "zone_append": false, 00:15:38.105 "compare": false, 00:15:38.105 "compare_and_write": false, 00:15:38.105 "abort": true, 00:15:38.105 "seek_hole": false, 00:15:38.105 "seek_data": false, 00:15:38.105 "copy": true, 00:15:38.105 "nvme_iov_md": false 00:15:38.105 }, 00:15:38.105 "memory_domains": [ 00:15:38.105 { 00:15:38.105 "dma_device_id": "system", 00:15:38.105 "dma_device_type": 1 00:15:38.105 }, 00:15:38.105 { 00:15:38.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.105 "dma_device_type": 2 00:15:38.105 } 00:15:38.105 ], 00:15:38.105 "driver_specific": {} 00:15:38.105 } 00:15:38.105 ] 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.105 "name": "Existed_Raid", 00:15:38.105 "uuid": "90f408f0-b820-4428-8f30-6f21f7e7c7b2", 00:15:38.105 "strip_size_kb": 64, 00:15:38.105 "state": "configuring", 00:15:38.105 "raid_level": "concat", 00:15:38.105 "superblock": true, 00:15:38.105 "num_base_bdevs": 3, 00:15:38.105 "num_base_bdevs_discovered": 2, 00:15:38.105 "num_base_bdevs_operational": 3, 00:15:38.105 "base_bdevs_list": [ 00:15:38.105 { 00:15:38.105 "name": "BaseBdev1", 00:15:38.105 "uuid": "39eab8d2-b5c4-4fed-a0ac-3af24da2b94f", 00:15:38.105 "is_configured": true, 00:15:38.105 "data_offset": 2048, 00:15:38.105 "data_size": 63488 00:15:38.105 }, 00:15:38.105 { 00:15:38.105 "name": "BaseBdev2", 00:15:38.105 "uuid": "e5f6f931-7483-4def-9a92-20f3f36ea2fe", 00:15:38.105 "is_configured": true, 00:15:38.105 "data_offset": 2048, 00:15:38.105 "data_size": 63488 00:15:38.105 }, 00:15:38.105 { 00:15:38.105 "name": "BaseBdev3", 00:15:38.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.105 "is_configured": false, 00:15:38.105 "data_offset": 0, 00:15:38.105 "data_size": 0 00:15:38.105 } 00:15:38.105 ] 00:15:38.105 }' 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.105 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.784 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.784 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.784 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.784 [2024-12-06 13:09:45.032288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.784 [2024-12-06 13:09:45.032664] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:38.784 [2024-12-06 13:09:45.032694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:38.784 BaseBdev3 00:15:38.784 [2024-12-06 13:09:45.033043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:38.784 [2024-12-06 13:09:45.033272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:38.784 [2024-12-06 13:09:45.033291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:38.784 [2024-12-06 13:09:45.033504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.784 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.784 [ 00:15:38.784 { 00:15:38.784 "name": "BaseBdev3", 00:15:38.784 "aliases": [ 00:15:38.784 "16ea2459-0fe0-4b4c-9feb-9332e4a1bb52" 00:15:38.784 ], 00:15:38.784 "product_name": "Malloc disk", 00:15:38.784 "block_size": 512, 00:15:38.784 "num_blocks": 65536, 00:15:38.784 "uuid": "16ea2459-0fe0-4b4c-9feb-9332e4a1bb52", 00:15:38.784 "assigned_rate_limits": { 00:15:38.784 "rw_ios_per_sec": 0, 00:15:38.784 "rw_mbytes_per_sec": 0, 00:15:38.784 "r_mbytes_per_sec": 0, 00:15:38.784 "w_mbytes_per_sec": 0 00:15:38.784 }, 00:15:38.784 "claimed": true, 00:15:38.784 "claim_type": "exclusive_write", 00:15:38.784 "zoned": false, 00:15:38.784 "supported_io_types": { 00:15:38.784 "read": true, 00:15:38.784 "write": true, 00:15:38.784 "unmap": true, 00:15:38.784 "flush": true, 00:15:38.784 "reset": true, 00:15:38.784 "nvme_admin": false, 00:15:38.784 "nvme_io": false, 00:15:38.784 "nvme_io_md": false, 00:15:38.784 "write_zeroes": true, 00:15:38.784 "zcopy": true, 00:15:38.784 "get_zone_info": false, 00:15:38.784 "zone_management": false, 00:15:38.784 "zone_append": false, 00:15:38.784 "compare": false, 00:15:38.784 "compare_and_write": false, 00:15:38.784 "abort": true, 00:15:38.784 "seek_hole": false, 00:15:38.784 "seek_data": false, 00:15:38.784 "copy": true, 00:15:38.784 "nvme_iov_md": false 00:15:38.784 }, 00:15:38.784 "memory_domains": [ 00:15:38.784 { 00:15:38.784 "dma_device_id": "system", 00:15:38.784 "dma_device_type": 1 00:15:38.784 }, 00:15:38.785 { 00:15:38.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.785 "dma_device_type": 2 00:15:38.785 } 00:15:38.785 ], 00:15:38.785 "driver_specific": 
{} 00:15:38.785 } 00:15:38.785 ] 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.785 
13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.785 "name": "Existed_Raid", 00:15:38.785 "uuid": "90f408f0-b820-4428-8f30-6f21f7e7c7b2", 00:15:38.785 "strip_size_kb": 64, 00:15:38.785 "state": "online", 00:15:38.785 "raid_level": "concat", 00:15:38.785 "superblock": true, 00:15:38.785 "num_base_bdevs": 3, 00:15:38.785 "num_base_bdevs_discovered": 3, 00:15:38.785 "num_base_bdevs_operational": 3, 00:15:38.785 "base_bdevs_list": [ 00:15:38.785 { 00:15:38.785 "name": "BaseBdev1", 00:15:38.785 "uuid": "39eab8d2-b5c4-4fed-a0ac-3af24da2b94f", 00:15:38.785 "is_configured": true, 00:15:38.785 "data_offset": 2048, 00:15:38.785 "data_size": 63488 00:15:38.785 }, 00:15:38.785 { 00:15:38.785 "name": "BaseBdev2", 00:15:38.785 "uuid": "e5f6f931-7483-4def-9a92-20f3f36ea2fe", 00:15:38.785 "is_configured": true, 00:15:38.785 "data_offset": 2048, 00:15:38.785 "data_size": 63488 00:15:38.785 }, 00:15:38.785 { 00:15:38.785 "name": "BaseBdev3", 00:15:38.785 "uuid": "16ea2459-0fe0-4b4c-9feb-9332e4a1bb52", 00:15:38.785 "is_configured": true, 00:15:38.785 "data_offset": 2048, 00:15:38.785 "data_size": 63488 00:15:38.785 } 00:15:38.785 ] 00:15:38.785 }' 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.785 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.059 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.059 [2024-12-06 13:09:45.572929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.317 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.317 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.317 "name": "Existed_Raid", 00:15:39.317 "aliases": [ 00:15:39.317 "90f408f0-b820-4428-8f30-6f21f7e7c7b2" 00:15:39.317 ], 00:15:39.318 "product_name": "Raid Volume", 00:15:39.318 "block_size": 512, 00:15:39.318 "num_blocks": 190464, 00:15:39.318 "uuid": "90f408f0-b820-4428-8f30-6f21f7e7c7b2", 00:15:39.318 "assigned_rate_limits": { 00:15:39.318 "rw_ios_per_sec": 0, 00:15:39.318 "rw_mbytes_per_sec": 0, 00:15:39.318 "r_mbytes_per_sec": 0, 00:15:39.318 "w_mbytes_per_sec": 0 00:15:39.318 }, 00:15:39.318 "claimed": false, 00:15:39.318 "zoned": false, 00:15:39.318 "supported_io_types": { 00:15:39.318 "read": true, 00:15:39.318 "write": true, 00:15:39.318 "unmap": true, 00:15:39.318 "flush": true, 00:15:39.318 "reset": true, 00:15:39.318 "nvme_admin": false, 00:15:39.318 "nvme_io": false, 00:15:39.318 "nvme_io_md": false, 00:15:39.318 
"write_zeroes": true, 00:15:39.318 "zcopy": false, 00:15:39.318 "get_zone_info": false, 00:15:39.318 "zone_management": false, 00:15:39.318 "zone_append": false, 00:15:39.318 "compare": false, 00:15:39.318 "compare_and_write": false, 00:15:39.318 "abort": false, 00:15:39.318 "seek_hole": false, 00:15:39.318 "seek_data": false, 00:15:39.318 "copy": false, 00:15:39.318 "nvme_iov_md": false 00:15:39.318 }, 00:15:39.318 "memory_domains": [ 00:15:39.318 { 00:15:39.318 "dma_device_id": "system", 00:15:39.318 "dma_device_type": 1 00:15:39.318 }, 00:15:39.318 { 00:15:39.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.318 "dma_device_type": 2 00:15:39.318 }, 00:15:39.318 { 00:15:39.318 "dma_device_id": "system", 00:15:39.318 "dma_device_type": 1 00:15:39.318 }, 00:15:39.318 { 00:15:39.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.318 "dma_device_type": 2 00:15:39.318 }, 00:15:39.318 { 00:15:39.318 "dma_device_id": "system", 00:15:39.318 "dma_device_type": 1 00:15:39.318 }, 00:15:39.318 { 00:15:39.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.318 "dma_device_type": 2 00:15:39.318 } 00:15:39.318 ], 00:15:39.318 "driver_specific": { 00:15:39.318 "raid": { 00:15:39.318 "uuid": "90f408f0-b820-4428-8f30-6f21f7e7c7b2", 00:15:39.318 "strip_size_kb": 64, 00:15:39.318 "state": "online", 00:15:39.318 "raid_level": "concat", 00:15:39.318 "superblock": true, 00:15:39.318 "num_base_bdevs": 3, 00:15:39.318 "num_base_bdevs_discovered": 3, 00:15:39.318 "num_base_bdevs_operational": 3, 00:15:39.318 "base_bdevs_list": [ 00:15:39.318 { 00:15:39.318 "name": "BaseBdev1", 00:15:39.318 "uuid": "39eab8d2-b5c4-4fed-a0ac-3af24da2b94f", 00:15:39.318 "is_configured": true, 00:15:39.318 "data_offset": 2048, 00:15:39.318 "data_size": 63488 00:15:39.318 }, 00:15:39.318 { 00:15:39.318 "name": "BaseBdev2", 00:15:39.318 "uuid": "e5f6f931-7483-4def-9a92-20f3f36ea2fe", 00:15:39.318 "is_configured": true, 00:15:39.318 "data_offset": 2048, 00:15:39.318 "data_size": 63488 00:15:39.318 }, 
00:15:39.318 { 00:15:39.318 "name": "BaseBdev3", 00:15:39.318 "uuid": "16ea2459-0fe0-4b4c-9feb-9332e4a1bb52", 00:15:39.318 "is_configured": true, 00:15:39.318 "data_offset": 2048, 00:15:39.318 "data_size": 63488 00:15:39.318 } 00:15:39.318 ] 00:15:39.318 } 00:15:39.318 } 00:15:39.318 }' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:39.318 BaseBdev2 00:15:39.318 BaseBdev3' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.318 
13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.318 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.579 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.580 [2024-12-06 13:09:45.852627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.580 [2024-12-06 13:09:45.852668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.580 [2024-12-06 13:09:45.852745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.580 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.580 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.580 "name": "Existed_Raid", 00:15:39.580 "uuid": "90f408f0-b820-4428-8f30-6f21f7e7c7b2", 00:15:39.580 "strip_size_kb": 64, 00:15:39.580 "state": "offline", 00:15:39.580 "raid_level": "concat", 00:15:39.580 "superblock": true, 00:15:39.580 "num_base_bdevs": 3, 00:15:39.580 "num_base_bdevs_discovered": 2, 00:15:39.580 "num_base_bdevs_operational": 2, 00:15:39.580 "base_bdevs_list": [ 00:15:39.580 { 00:15:39.580 "name": null, 00:15:39.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.580 "is_configured": false, 00:15:39.580 "data_offset": 0, 00:15:39.580 "data_size": 63488 00:15:39.580 }, 00:15:39.580 { 00:15:39.580 "name": "BaseBdev2", 00:15:39.580 "uuid": "e5f6f931-7483-4def-9a92-20f3f36ea2fe", 00:15:39.580 "is_configured": true, 00:15:39.580 "data_offset": 2048, 00:15:39.580 "data_size": 63488 00:15:39.580 }, 00:15:39.580 { 00:15:39.580 "name": "BaseBdev3", 00:15:39.580 "uuid": "16ea2459-0fe0-4b4c-9feb-9332e4a1bb52", 
00:15:39.580 "is_configured": true, 00:15:39.580 "data_offset": 2048, 00:15:39.580 "data_size": 63488 00:15:39.580 } 00:15:39.580 ] 00:15:39.580 }' 00:15:39.580 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.580 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.147 [2024-12-06 13:09:46.544801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.147 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.406 [2024-12-06 13:09:46.690672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:40.406 [2024-12-06 13:09:46.690750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.406 BaseBdev2 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:40.406 13:09:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.406 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.406 [ 00:15:40.406 { 00:15:40.406 "name": "BaseBdev2", 00:15:40.406 "aliases": [ 00:15:40.406 "106072fd-e77f-4318-a696-fae08a2b3166" 00:15:40.406 ], 00:15:40.407 "product_name": "Malloc disk", 00:15:40.407 "block_size": 512, 00:15:40.407 "num_blocks": 65536, 00:15:40.407 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:40.407 "assigned_rate_limits": { 00:15:40.407 "rw_ios_per_sec": 0, 00:15:40.407 "rw_mbytes_per_sec": 0, 00:15:40.407 "r_mbytes_per_sec": 0, 00:15:40.407 "w_mbytes_per_sec": 0 00:15:40.407 }, 00:15:40.407 "claimed": false, 00:15:40.407 "zoned": false, 00:15:40.407 "supported_io_types": { 00:15:40.407 "read": true, 00:15:40.407 "write": true, 00:15:40.407 "unmap": true, 00:15:40.407 "flush": true, 00:15:40.407 "reset": true, 00:15:40.407 "nvme_admin": false, 00:15:40.407 "nvme_io": false, 00:15:40.407 "nvme_io_md": false, 00:15:40.407 "write_zeroes": true, 00:15:40.407 "zcopy": true, 00:15:40.407 "get_zone_info": false, 00:15:40.407 
"zone_management": false, 00:15:40.407 "zone_append": false, 00:15:40.407 "compare": false, 00:15:40.407 "compare_and_write": false, 00:15:40.407 "abort": true, 00:15:40.407 "seek_hole": false, 00:15:40.407 "seek_data": false, 00:15:40.407 "copy": true, 00:15:40.407 "nvme_iov_md": false 00:15:40.407 }, 00:15:40.407 "memory_domains": [ 00:15:40.407 { 00:15:40.407 "dma_device_id": "system", 00:15:40.407 "dma_device_type": 1 00:15:40.407 }, 00:15:40.407 { 00:15:40.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.407 "dma_device_type": 2 00:15:40.407 } 00:15:40.407 ], 00:15:40.407 "driver_specific": {} 00:15:40.407 } 00:15:40.407 ] 00:15:40.407 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.407 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:40.407 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:40.407 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:40.407 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:40.407 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.407 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.666 BaseBdev3 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.666 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.666 [ 00:15:40.666 { 00:15:40.666 "name": "BaseBdev3", 00:15:40.666 "aliases": [ 00:15:40.666 "69b043c5-4191-4c0d-93ca-b8b8e73448b8" 00:15:40.666 ], 00:15:40.666 "product_name": "Malloc disk", 00:15:40.666 "block_size": 512, 00:15:40.666 "num_blocks": 65536, 00:15:40.666 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:40.666 "assigned_rate_limits": { 00:15:40.666 "rw_ios_per_sec": 0, 00:15:40.666 "rw_mbytes_per_sec": 0, 00:15:40.666 "r_mbytes_per_sec": 0, 00:15:40.666 "w_mbytes_per_sec": 0 00:15:40.666 }, 00:15:40.666 "claimed": false, 00:15:40.666 "zoned": false, 00:15:40.666 "supported_io_types": { 00:15:40.666 "read": true, 00:15:40.666 "write": true, 00:15:40.666 "unmap": true, 00:15:40.666 "flush": true, 00:15:40.666 "reset": true, 00:15:40.666 "nvme_admin": false, 00:15:40.666 "nvme_io": false, 00:15:40.667 "nvme_io_md": false, 00:15:40.667 "write_zeroes": true, 00:15:40.667 
"zcopy": true, 00:15:40.667 "get_zone_info": false, 00:15:40.667 "zone_management": false, 00:15:40.667 "zone_append": false, 00:15:40.667 "compare": false, 00:15:40.667 "compare_and_write": false, 00:15:40.667 "abort": true, 00:15:40.667 "seek_hole": false, 00:15:40.667 "seek_data": false, 00:15:40.667 "copy": true, 00:15:40.667 "nvme_iov_md": false 00:15:40.667 }, 00:15:40.667 "memory_domains": [ 00:15:40.667 { 00:15:40.667 "dma_device_id": "system", 00:15:40.667 "dma_device_type": 1 00:15:40.667 }, 00:15:40.667 { 00:15:40.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.667 "dma_device_type": 2 00:15:40.667 } 00:15:40.667 ], 00:15:40.667 "driver_specific": {} 00:15:40.667 } 00:15:40.667 ] 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.667 [2024-12-06 13:09:46.989126] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:40.667 [2024-12-06 13:09:46.989203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:40.667 [2024-12-06 13:09:46.989236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.667 [2024-12-06 13:09:46.991828] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.667 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.667 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.667 13:09:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.667 "name": "Existed_Raid", 00:15:40.667 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:40.667 "strip_size_kb": 64, 00:15:40.667 "state": "configuring", 00:15:40.667 "raid_level": "concat", 00:15:40.667 "superblock": true, 00:15:40.667 "num_base_bdevs": 3, 00:15:40.667 "num_base_bdevs_discovered": 2, 00:15:40.667 "num_base_bdevs_operational": 3, 00:15:40.667 "base_bdevs_list": [ 00:15:40.667 { 00:15:40.667 "name": "BaseBdev1", 00:15:40.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.667 "is_configured": false, 00:15:40.667 "data_offset": 0, 00:15:40.667 "data_size": 0 00:15:40.667 }, 00:15:40.667 { 00:15:40.667 "name": "BaseBdev2", 00:15:40.667 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:40.667 "is_configured": true, 00:15:40.667 "data_offset": 2048, 00:15:40.667 "data_size": 63488 00:15:40.667 }, 00:15:40.667 { 00:15:40.667 "name": "BaseBdev3", 00:15:40.667 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:40.667 "is_configured": true, 00:15:40.667 "data_offset": 2048, 00:15:40.667 "data_size": 63488 00:15:40.667 } 00:15:40.667 ] 00:15:40.667 }' 00:15:40.667 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.667 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.234 [2024-12-06 13:09:47.537351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.234 13:09:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.234 "name": "Existed_Raid", 00:15:41.234 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:41.234 "strip_size_kb": 64, 
00:15:41.234 "state": "configuring", 00:15:41.234 "raid_level": "concat", 00:15:41.234 "superblock": true, 00:15:41.234 "num_base_bdevs": 3, 00:15:41.234 "num_base_bdevs_discovered": 1, 00:15:41.234 "num_base_bdevs_operational": 3, 00:15:41.234 "base_bdevs_list": [ 00:15:41.234 { 00:15:41.234 "name": "BaseBdev1", 00:15:41.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.234 "is_configured": false, 00:15:41.234 "data_offset": 0, 00:15:41.234 "data_size": 0 00:15:41.234 }, 00:15:41.234 { 00:15:41.234 "name": null, 00:15:41.234 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:41.234 "is_configured": false, 00:15:41.234 "data_offset": 0, 00:15:41.234 "data_size": 63488 00:15:41.234 }, 00:15:41.234 { 00:15:41.234 "name": "BaseBdev3", 00:15:41.234 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:41.234 "is_configured": true, 00:15:41.234 "data_offset": 2048, 00:15:41.234 "data_size": 63488 00:15:41.234 } 00:15:41.234 ] 00:15:41.234 }' 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.234 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.803 [2024-12-06 13:09:48.130843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.803 BaseBdev1 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.803 
[ 00:15:41.803 { 00:15:41.803 "name": "BaseBdev1", 00:15:41.803 "aliases": [ 00:15:41.803 "30e65b1d-f010-46fb-8456-b4790ba8226f" 00:15:41.803 ], 00:15:41.803 "product_name": "Malloc disk", 00:15:41.803 "block_size": 512, 00:15:41.803 "num_blocks": 65536, 00:15:41.803 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:41.803 "assigned_rate_limits": { 00:15:41.803 "rw_ios_per_sec": 0, 00:15:41.803 "rw_mbytes_per_sec": 0, 00:15:41.803 "r_mbytes_per_sec": 0, 00:15:41.803 "w_mbytes_per_sec": 0 00:15:41.803 }, 00:15:41.803 "claimed": true, 00:15:41.803 "claim_type": "exclusive_write", 00:15:41.803 "zoned": false, 00:15:41.803 "supported_io_types": { 00:15:41.803 "read": true, 00:15:41.803 "write": true, 00:15:41.803 "unmap": true, 00:15:41.803 "flush": true, 00:15:41.803 "reset": true, 00:15:41.803 "nvme_admin": false, 00:15:41.803 "nvme_io": false, 00:15:41.803 "nvme_io_md": false, 00:15:41.803 "write_zeroes": true, 00:15:41.803 "zcopy": true, 00:15:41.803 "get_zone_info": false, 00:15:41.803 "zone_management": false, 00:15:41.803 "zone_append": false, 00:15:41.803 "compare": false, 00:15:41.803 "compare_and_write": false, 00:15:41.803 "abort": true, 00:15:41.803 "seek_hole": false, 00:15:41.803 "seek_data": false, 00:15:41.803 "copy": true, 00:15:41.803 "nvme_iov_md": false 00:15:41.803 }, 00:15:41.803 "memory_domains": [ 00:15:41.803 { 00:15:41.803 "dma_device_id": "system", 00:15:41.803 "dma_device_type": 1 00:15:41.803 }, 00:15:41.803 { 00:15:41.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.803 "dma_device_type": 2 00:15:41.803 } 00:15:41.803 ], 00:15:41.803 "driver_specific": {} 00:15:41.803 } 00:15:41.803 ] 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.803 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.804 "name": "Existed_Raid", 00:15:41.804 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:41.804 "strip_size_kb": 64, 00:15:41.804 "state": "configuring", 00:15:41.804 "raid_level": "concat", 00:15:41.804 "superblock": true, 
00:15:41.804 "num_base_bdevs": 3, 00:15:41.804 "num_base_bdevs_discovered": 2, 00:15:41.804 "num_base_bdevs_operational": 3, 00:15:41.804 "base_bdevs_list": [ 00:15:41.804 { 00:15:41.804 "name": "BaseBdev1", 00:15:41.804 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:41.804 "is_configured": true, 00:15:41.804 "data_offset": 2048, 00:15:41.804 "data_size": 63488 00:15:41.804 }, 00:15:41.804 { 00:15:41.804 "name": null, 00:15:41.804 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:41.804 "is_configured": false, 00:15:41.804 "data_offset": 0, 00:15:41.804 "data_size": 63488 00:15:41.804 }, 00:15:41.804 { 00:15:41.804 "name": "BaseBdev3", 00:15:41.804 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:41.804 "is_configured": true, 00:15:41.804 "data_offset": 2048, 00:15:41.804 "data_size": 63488 00:15:41.804 } 00:15:41.804 ] 00:15:41.804 }' 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.804 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.372 [2024-12-06 13:09:48.723036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.372 "name": "Existed_Raid", 00:15:42.372 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:42.372 "strip_size_kb": 64, 00:15:42.372 "state": "configuring", 00:15:42.372 "raid_level": "concat", 00:15:42.372 "superblock": true, 00:15:42.372 "num_base_bdevs": 3, 00:15:42.372 "num_base_bdevs_discovered": 1, 00:15:42.372 "num_base_bdevs_operational": 3, 00:15:42.372 "base_bdevs_list": [ 00:15:42.372 { 00:15:42.372 "name": "BaseBdev1", 00:15:42.372 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:42.372 "is_configured": true, 00:15:42.372 "data_offset": 2048, 00:15:42.372 "data_size": 63488 00:15:42.372 }, 00:15:42.372 { 00:15:42.372 "name": null, 00:15:42.372 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:42.372 "is_configured": false, 00:15:42.372 "data_offset": 0, 00:15:42.372 "data_size": 63488 00:15:42.372 }, 00:15:42.372 { 00:15:42.372 "name": null, 00:15:42.372 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:42.372 "is_configured": false, 00:15:42.372 "data_offset": 0, 00:15:42.372 "data_size": 63488 00:15:42.372 } 00:15:42.372 ] 00:15:42.372 }' 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.372 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.942 [2024-12-06 13:09:49.295281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.942 "name": "Existed_Raid", 00:15:42.942 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:42.942 "strip_size_kb": 64, 00:15:42.942 "state": "configuring", 00:15:42.942 "raid_level": "concat", 00:15:42.942 "superblock": true, 00:15:42.942 "num_base_bdevs": 3, 00:15:42.942 "num_base_bdevs_discovered": 2, 00:15:42.942 "num_base_bdevs_operational": 3, 00:15:42.942 "base_bdevs_list": [ 00:15:42.942 { 00:15:42.942 "name": "BaseBdev1", 00:15:42.942 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:42.942 "is_configured": true, 00:15:42.942 "data_offset": 2048, 00:15:42.942 "data_size": 63488 00:15:42.942 }, 00:15:42.942 { 00:15:42.942 "name": null, 00:15:42.942 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:42.942 "is_configured": false, 00:15:42.942 "data_offset": 0, 00:15:42.942 "data_size": 63488 00:15:42.942 }, 00:15:42.942 { 00:15:42.942 "name": "BaseBdev3", 00:15:42.942 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:42.942 "is_configured": true, 00:15:42.942 "data_offset": 2048, 00:15:42.942 "data_size": 63488 00:15:42.942 } 00:15:42.942 ] 00:15:42.942 }' 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.942 13:09:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:43.544 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.544 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.545 [2024-12-06 13:09:49.899494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.545 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.545 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.545 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.545 "name": "Existed_Raid", 00:15:43.545 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:43.545 "strip_size_kb": 64, 00:15:43.545 "state": "configuring", 00:15:43.545 "raid_level": "concat", 00:15:43.545 "superblock": true, 00:15:43.545 "num_base_bdevs": 3, 00:15:43.545 "num_base_bdevs_discovered": 1, 00:15:43.545 "num_base_bdevs_operational": 3, 00:15:43.545 "base_bdevs_list": [ 00:15:43.545 { 00:15:43.545 "name": null, 00:15:43.545 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:43.545 "is_configured": false, 00:15:43.545 "data_offset": 0, 00:15:43.545 "data_size": 63488 00:15:43.545 }, 00:15:43.545 { 00:15:43.545 "name": null, 00:15:43.545 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:43.545 "is_configured": false, 00:15:43.545 "data_offset": 0, 
00:15:43.545 "data_size": 63488 00:15:43.545 }, 00:15:43.545 { 00:15:43.545 "name": "BaseBdev3", 00:15:43.545 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:43.545 "is_configured": true, 00:15:43.545 "data_offset": 2048, 00:15:43.545 "data_size": 63488 00:15:43.545 } 00:15:43.545 ] 00:15:43.545 }' 00:15:43.545 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.545 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.114 [2024-12-06 13:09:50.612353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:44.114 13:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.114 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.373 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.373 "name": "Existed_Raid", 00:15:44.373 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:44.373 "strip_size_kb": 64, 00:15:44.373 "state": "configuring", 00:15:44.373 "raid_level": "concat", 00:15:44.373 "superblock": true, 00:15:44.373 "num_base_bdevs": 3, 00:15:44.373 
"num_base_bdevs_discovered": 2, 00:15:44.373 "num_base_bdevs_operational": 3, 00:15:44.373 "base_bdevs_list": [ 00:15:44.373 { 00:15:44.373 "name": null, 00:15:44.373 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:44.373 "is_configured": false, 00:15:44.373 "data_offset": 0, 00:15:44.373 "data_size": 63488 00:15:44.373 }, 00:15:44.373 { 00:15:44.373 "name": "BaseBdev2", 00:15:44.373 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:44.373 "is_configured": true, 00:15:44.373 "data_offset": 2048, 00:15:44.373 "data_size": 63488 00:15:44.373 }, 00:15:44.373 { 00:15:44.373 "name": "BaseBdev3", 00:15:44.373 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:44.373 "is_configured": true, 00:15:44.373 "data_offset": 2048, 00:15:44.373 "data_size": 63488 00:15:44.373 } 00:15:44.373 ] 00:15:44.373 }' 00:15:44.373 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.373 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.635 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:44.635 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.635 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.635 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.894 13:09:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 30e65b1d-f010-46fb-8456-b4790ba8226f 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.894 [2024-12-06 13:09:51.305869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:44.894 [2024-12-06 13:09:51.306186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:44.894 [2024-12-06 13:09:51.306210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:44.894 [2024-12-06 13:09:51.306583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:44.894 [2024-12-06 13:09:51.306800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:44.894 [2024-12-06 13:09:51.306822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:44.894 [2024-12-06 13:09:51.306992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.894 NewBaseBdev 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.894 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.895 [ 00:15:44.895 { 00:15:44.895 "name": "NewBaseBdev", 00:15:44.895 "aliases": [ 00:15:44.895 "30e65b1d-f010-46fb-8456-b4790ba8226f" 00:15:44.895 ], 00:15:44.895 "product_name": "Malloc disk", 00:15:44.895 "block_size": 512, 00:15:44.895 "num_blocks": 65536, 00:15:44.895 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:44.895 "assigned_rate_limits": { 00:15:44.895 "rw_ios_per_sec": 0, 00:15:44.895 "rw_mbytes_per_sec": 0, 00:15:44.895 "r_mbytes_per_sec": 0, 00:15:44.895 "w_mbytes_per_sec": 0 00:15:44.895 }, 00:15:44.895 "claimed": true, 00:15:44.895 "claim_type": "exclusive_write", 00:15:44.895 "zoned": false, 00:15:44.895 "supported_io_types": { 00:15:44.895 "read": true, 00:15:44.895 "write": true, 
00:15:44.895 "unmap": true, 00:15:44.895 "flush": true, 00:15:44.895 "reset": true, 00:15:44.895 "nvme_admin": false, 00:15:44.895 "nvme_io": false, 00:15:44.895 "nvme_io_md": false, 00:15:44.895 "write_zeroes": true, 00:15:44.895 "zcopy": true, 00:15:44.895 "get_zone_info": false, 00:15:44.895 "zone_management": false, 00:15:44.895 "zone_append": false, 00:15:44.895 "compare": false, 00:15:44.895 "compare_and_write": false, 00:15:44.895 "abort": true, 00:15:44.895 "seek_hole": false, 00:15:44.895 "seek_data": false, 00:15:44.895 "copy": true, 00:15:44.895 "nvme_iov_md": false 00:15:44.895 }, 00:15:44.895 "memory_domains": [ 00:15:44.895 { 00:15:44.895 "dma_device_id": "system", 00:15:44.895 "dma_device_type": 1 00:15:44.895 }, 00:15:44.895 { 00:15:44.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.895 "dma_device_type": 2 00:15:44.895 } 00:15:44.895 ], 00:15:44.895 "driver_specific": {} 00:15:44.895 } 00:15:44.895 ] 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.895 "name": "Existed_Raid", 00:15:44.895 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:44.895 "strip_size_kb": 64, 00:15:44.895 "state": "online", 00:15:44.895 "raid_level": "concat", 00:15:44.895 "superblock": true, 00:15:44.895 "num_base_bdevs": 3, 00:15:44.895 "num_base_bdevs_discovered": 3, 00:15:44.895 "num_base_bdevs_operational": 3, 00:15:44.895 "base_bdevs_list": [ 00:15:44.895 { 00:15:44.895 "name": "NewBaseBdev", 00:15:44.895 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:44.895 "is_configured": true, 00:15:44.895 "data_offset": 2048, 00:15:44.895 "data_size": 63488 00:15:44.895 }, 00:15:44.895 { 00:15:44.895 "name": "BaseBdev2", 00:15:44.895 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:44.895 "is_configured": true, 00:15:44.895 "data_offset": 2048, 00:15:44.895 "data_size": 63488 00:15:44.895 }, 00:15:44.895 { 00:15:44.895 "name": "BaseBdev3", 00:15:44.895 "uuid": 
"69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:44.895 "is_configured": true, 00:15:44.895 "data_offset": 2048, 00:15:44.895 "data_size": 63488 00:15:44.895 } 00:15:44.895 ] 00:15:44.895 }' 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.895 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.460 [2024-12-06 13:09:51.870595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.460 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.460 "name": "Existed_Raid", 00:15:45.460 "aliases": [ 00:15:45.460 "c1e34ef3-8e4b-4640-93bc-6df66881bebd" 
00:15:45.460 ], 00:15:45.460 "product_name": "Raid Volume", 00:15:45.460 "block_size": 512, 00:15:45.460 "num_blocks": 190464, 00:15:45.460 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:45.460 "assigned_rate_limits": { 00:15:45.460 "rw_ios_per_sec": 0, 00:15:45.460 "rw_mbytes_per_sec": 0, 00:15:45.460 "r_mbytes_per_sec": 0, 00:15:45.460 "w_mbytes_per_sec": 0 00:15:45.460 }, 00:15:45.460 "claimed": false, 00:15:45.460 "zoned": false, 00:15:45.460 "supported_io_types": { 00:15:45.460 "read": true, 00:15:45.461 "write": true, 00:15:45.461 "unmap": true, 00:15:45.461 "flush": true, 00:15:45.461 "reset": true, 00:15:45.461 "nvme_admin": false, 00:15:45.461 "nvme_io": false, 00:15:45.461 "nvme_io_md": false, 00:15:45.461 "write_zeroes": true, 00:15:45.461 "zcopy": false, 00:15:45.461 "get_zone_info": false, 00:15:45.461 "zone_management": false, 00:15:45.461 "zone_append": false, 00:15:45.461 "compare": false, 00:15:45.461 "compare_and_write": false, 00:15:45.461 "abort": false, 00:15:45.461 "seek_hole": false, 00:15:45.461 "seek_data": false, 00:15:45.461 "copy": false, 00:15:45.461 "nvme_iov_md": false 00:15:45.461 }, 00:15:45.461 "memory_domains": [ 00:15:45.461 { 00:15:45.461 "dma_device_id": "system", 00:15:45.461 "dma_device_type": 1 00:15:45.461 }, 00:15:45.461 { 00:15:45.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.461 "dma_device_type": 2 00:15:45.461 }, 00:15:45.461 { 00:15:45.461 "dma_device_id": "system", 00:15:45.461 "dma_device_type": 1 00:15:45.461 }, 00:15:45.461 { 00:15:45.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.461 "dma_device_type": 2 00:15:45.461 }, 00:15:45.461 { 00:15:45.461 "dma_device_id": "system", 00:15:45.461 "dma_device_type": 1 00:15:45.461 }, 00:15:45.461 { 00:15:45.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.461 "dma_device_type": 2 00:15:45.461 } 00:15:45.461 ], 00:15:45.461 "driver_specific": { 00:15:45.461 "raid": { 00:15:45.461 "uuid": "c1e34ef3-8e4b-4640-93bc-6df66881bebd", 00:15:45.461 
"strip_size_kb": 64, 00:15:45.461 "state": "online", 00:15:45.461 "raid_level": "concat", 00:15:45.461 "superblock": true, 00:15:45.461 "num_base_bdevs": 3, 00:15:45.461 "num_base_bdevs_discovered": 3, 00:15:45.461 "num_base_bdevs_operational": 3, 00:15:45.461 "base_bdevs_list": [ 00:15:45.461 { 00:15:45.461 "name": "NewBaseBdev", 00:15:45.461 "uuid": "30e65b1d-f010-46fb-8456-b4790ba8226f", 00:15:45.461 "is_configured": true, 00:15:45.461 "data_offset": 2048, 00:15:45.461 "data_size": 63488 00:15:45.461 }, 00:15:45.461 { 00:15:45.461 "name": "BaseBdev2", 00:15:45.461 "uuid": "106072fd-e77f-4318-a696-fae08a2b3166", 00:15:45.461 "is_configured": true, 00:15:45.461 "data_offset": 2048, 00:15:45.461 "data_size": 63488 00:15:45.461 }, 00:15:45.461 { 00:15:45.461 "name": "BaseBdev3", 00:15:45.461 "uuid": "69b043c5-4191-4c0d-93ca-b8b8e73448b8", 00:15:45.461 "is_configured": true, 00:15:45.461 "data_offset": 2048, 00:15:45.461 "data_size": 63488 00:15:45.461 } 00:15:45.461 ] 00:15:45.461 } 00:15:45.461 } 00:15:45.461 }' 00:15:45.461 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.461 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:45.461 BaseBdev2 00:15:45.461 BaseBdev3' 00:15:45.461 13:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.719 13:09:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.719 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.720 [2024-12-06 13:09:52.190231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.720 [2024-12-06 13:09:52.190270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.720 [2024-12-06 13:09:52.190408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.720 [2024-12-06 13:09:52.190508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.720 [2024-12-06 13:09:52.190536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66515 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66515 ']' 00:15:45.720 13:09:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66515 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66515 00:15:45.720 killing process with pid 66515 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66515' 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66515 00:15:45.720 [2024-12-06 13:09:52.230469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.720 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66515 00:15:46.284 [2024-12-06 13:09:52.518401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.219 13:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:47.219 00:15:47.219 real 0m12.073s 00:15:47.219 user 0m19.850s 00:15:47.219 sys 0m1.731s 00:15:47.219 13:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.219 ************************************ 00:15:47.219 END TEST raid_state_function_test_sb 00:15:47.219 ************************************ 00:15:47.219 13:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.219 13:09:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:47.219 
13:09:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:47.219 13:09:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.219 13:09:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.219 ************************************ 00:15:47.219 START TEST raid_superblock_test 00:15:47.219 ************************************ 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:47.219 
13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67152 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67152 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67152 ']' 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.219 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.477 [2024-12-06 13:09:53.812277] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:15:47.477 [2024-12-06 13:09:53.812434] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67152 ] 00:15:47.477 [2024-12-06 13:09:53.986373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.741 [2024-12-06 13:09:54.120267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.004 [2024-12-06 13:09:54.325418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.004 [2024-12-06 13:09:54.325693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:48.569 
13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 malloc1 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 [2024-12-06 13:09:54.865361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.569 [2024-12-06 13:09:54.865597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.569 [2024-12-06 13:09:54.865774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:48.569 [2024-12-06 13:09:54.865913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.569 [2024-12-06 13:09:54.868979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.569 [2024-12-06 13:09:54.869025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.569 pt1 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 malloc2 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 [2024-12-06 13:09:54.924874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.569 [2024-12-06 13:09:54.924964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.569 [2024-12-06 13:09:54.925001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.569 [2024-12-06 13:09:54.925016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.569 [2024-12-06 13:09:54.928024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.569 [2024-12-06 13:09:54.928195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.569 
pt2 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 malloc3 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.569 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 [2024-12-06 13:09:54.999432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:48.569 [2024-12-06 13:09:54.999642] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.569 [2024-12-06 13:09:54.999722] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.569 [2024-12-06 13:09:54.999881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.569 [2024-12-06 13:09:55.002914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.569 [2024-12-06 13:09:55.003072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:48.569 pt3 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 [2024-12-06 13:09:55.011548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.569 [2024-12-06 13:09:55.014081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.569 [2024-12-06 13:09:55.014203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.569 [2024-12-06 13:09:55.014442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:48.569 [2024-12-06 13:09:55.014489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:48.569 [2024-12-06 13:09:55.014819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:15:48.569 [2024-12-06 13:09:55.015034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:48.569 [2024-12-06 13:09:55.015050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:48.569 [2024-12-06 13:09:55.015240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.569 13:09:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.569 "name": "raid_bdev1", 00:15:48.569 "uuid": "f6c8f445-fae8-4650-b3c9-12f07d770e8b", 00:15:48.569 "strip_size_kb": 64, 00:15:48.569 "state": "online", 00:15:48.569 "raid_level": "concat", 00:15:48.569 "superblock": true, 00:15:48.569 "num_base_bdevs": 3, 00:15:48.569 "num_base_bdevs_discovered": 3, 00:15:48.569 "num_base_bdevs_operational": 3, 00:15:48.569 "base_bdevs_list": [ 00:15:48.569 { 00:15:48.569 "name": "pt1", 00:15:48.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.569 "is_configured": true, 00:15:48.569 "data_offset": 2048, 00:15:48.569 "data_size": 63488 00:15:48.569 }, 00:15:48.569 { 00:15:48.569 "name": "pt2", 00:15:48.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.569 "is_configured": true, 00:15:48.569 "data_offset": 2048, 00:15:48.569 "data_size": 63488 00:15:48.569 }, 00:15:48.569 { 00:15:48.569 "name": "pt3", 00:15:48.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.569 "is_configured": true, 00:15:48.569 "data_offset": 2048, 00:15:48.569 "data_size": 63488 00:15:48.569 } 00:15:48.569 ] 00:15:48.569 }' 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.569 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.133 [2024-12-06 13:09:55.516059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:49.133 "name": "raid_bdev1", 00:15:49.133 "aliases": [ 00:15:49.133 "f6c8f445-fae8-4650-b3c9-12f07d770e8b" 00:15:49.133 ], 00:15:49.133 "product_name": "Raid Volume", 00:15:49.133 "block_size": 512, 00:15:49.133 "num_blocks": 190464, 00:15:49.133 "uuid": "f6c8f445-fae8-4650-b3c9-12f07d770e8b", 00:15:49.133 "assigned_rate_limits": { 00:15:49.133 "rw_ios_per_sec": 0, 00:15:49.133 "rw_mbytes_per_sec": 0, 00:15:49.133 "r_mbytes_per_sec": 0, 00:15:49.133 "w_mbytes_per_sec": 0 00:15:49.133 }, 00:15:49.133 "claimed": false, 00:15:49.133 "zoned": false, 00:15:49.133 "supported_io_types": { 00:15:49.133 "read": true, 00:15:49.133 "write": true, 00:15:49.133 "unmap": true, 00:15:49.133 "flush": true, 00:15:49.133 "reset": true, 00:15:49.133 "nvme_admin": false, 00:15:49.133 "nvme_io": false, 00:15:49.133 "nvme_io_md": false, 00:15:49.133 "write_zeroes": true, 00:15:49.133 "zcopy": false, 00:15:49.133 "get_zone_info": false, 00:15:49.133 "zone_management": false, 00:15:49.133 "zone_append": false, 00:15:49.133 "compare": 
false, 00:15:49.133 "compare_and_write": false, 00:15:49.133 "abort": false, 00:15:49.133 "seek_hole": false, 00:15:49.133 "seek_data": false, 00:15:49.133 "copy": false, 00:15:49.133 "nvme_iov_md": false 00:15:49.133 }, 00:15:49.133 "memory_domains": [ 00:15:49.133 { 00:15:49.133 "dma_device_id": "system", 00:15:49.133 "dma_device_type": 1 00:15:49.133 }, 00:15:49.133 { 00:15:49.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.133 "dma_device_type": 2 00:15:49.133 }, 00:15:49.133 { 00:15:49.133 "dma_device_id": "system", 00:15:49.133 "dma_device_type": 1 00:15:49.133 }, 00:15:49.133 { 00:15:49.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.133 "dma_device_type": 2 00:15:49.133 }, 00:15:49.133 { 00:15:49.133 "dma_device_id": "system", 00:15:49.133 "dma_device_type": 1 00:15:49.133 }, 00:15:49.133 { 00:15:49.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.133 "dma_device_type": 2 00:15:49.133 } 00:15:49.133 ], 00:15:49.133 "driver_specific": { 00:15:49.133 "raid": { 00:15:49.133 "uuid": "f6c8f445-fae8-4650-b3c9-12f07d770e8b", 00:15:49.133 "strip_size_kb": 64, 00:15:49.133 "state": "online", 00:15:49.133 "raid_level": "concat", 00:15:49.133 "superblock": true, 00:15:49.133 "num_base_bdevs": 3, 00:15:49.133 "num_base_bdevs_discovered": 3, 00:15:49.133 "num_base_bdevs_operational": 3, 00:15:49.133 "base_bdevs_list": [ 00:15:49.133 { 00:15:49.133 "name": "pt1", 00:15:49.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.133 "is_configured": true, 00:15:49.133 "data_offset": 2048, 00:15:49.133 "data_size": 63488 00:15:49.133 }, 00:15:49.133 { 00:15:49.133 "name": "pt2", 00:15:49.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.133 "is_configured": true, 00:15:49.133 "data_offset": 2048, 00:15:49.133 "data_size": 63488 00:15:49.133 }, 00:15:49.133 { 00:15:49.133 "name": "pt3", 00:15:49.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.133 "is_configured": true, 00:15:49.133 "data_offset": 2048, 00:15:49.133 
"data_size": 63488 00:15:49.133 } 00:15:49.133 ] 00:15:49.133 } 00:15:49.133 } 00:15:49.133 }' 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:49.133 pt2 00:15:49.133 pt3' 00:15:49.133 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.390 13:09:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 [2024-12-06 13:09:55.852131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.390 13:09:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.390 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f6c8f445-fae8-4650-b3c9-12f07d770e8b 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f6c8f445-fae8-4650-b3c9-12f07d770e8b ']' 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.391 [2024-12-06 13:09:55.907767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.391 [2024-12-06 13:09:55.907818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.391 [2024-12-06 13:09:55.907935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.391 [2024-12-06 13:09:55.908034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.391 [2024-12-06 13:09:55.908051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.391 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.649 13:09:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.649 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.649 [2024-12-06 13:09:56.059891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:49.649 [2024-12-06 13:09:56.062602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:15:49.649 [2024-12-06 13:09:56.062690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:49.649 [2024-12-06 13:09:56.062772] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:49.649 [2024-12-06 13:09:56.062855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:49.649 [2024-12-06 13:09:56.062890] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:49.649 [2024-12-06 13:09:56.062917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.649 [2024-12-06 13:09:56.062931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:49.649 request: 00:15:49.649 { 00:15:49.649 "name": "raid_bdev1", 00:15:49.649 "raid_level": "concat", 00:15:49.649 "base_bdevs": [ 00:15:49.649 "malloc1", 00:15:49.649 "malloc2", 00:15:49.649 "malloc3" 00:15:49.649 ], 00:15:49.649 "strip_size_kb": 64, 00:15:49.649 "superblock": false, 00:15:49.649 "method": "bdev_raid_create", 00:15:49.649 "req_id": 1 00:15:49.649 } 00:15:49.649 Got JSON-RPC error response 00:15:49.649 response: 00:15:49.649 { 00:15:49.649 "code": -17, 00:15:49.649 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:49.649 } 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.649 [2024-12-06 13:09:56.135856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.649 [2024-12-06 13:09:56.135952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.649 [2024-12-06 13:09:56.135988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:49.649 [2024-12-06 13:09:56.136004] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.649 [2024-12-06 13:09:56.139167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.649 [2024-12-06 13:09:56.139210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.649 [2024-12-06 13:09:56.139334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:49.649 [2024-12-06 13:09:56.139409] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.649 pt1 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.649 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.907 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.907 "name": "raid_bdev1", 
00:15:49.907 "uuid": "f6c8f445-fae8-4650-b3c9-12f07d770e8b", 00:15:49.907 "strip_size_kb": 64, 00:15:49.907 "state": "configuring", 00:15:49.907 "raid_level": "concat", 00:15:49.907 "superblock": true, 00:15:49.907 "num_base_bdevs": 3, 00:15:49.907 "num_base_bdevs_discovered": 1, 00:15:49.907 "num_base_bdevs_operational": 3, 00:15:49.907 "base_bdevs_list": [ 00:15:49.907 { 00:15:49.907 "name": "pt1", 00:15:49.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.907 "is_configured": true, 00:15:49.907 "data_offset": 2048, 00:15:49.907 "data_size": 63488 00:15:49.907 }, 00:15:49.907 { 00:15:49.907 "name": null, 00:15:49.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.907 "is_configured": false, 00:15:49.907 "data_offset": 2048, 00:15:49.907 "data_size": 63488 00:15:49.907 }, 00:15:49.907 { 00:15:49.907 "name": null, 00:15:49.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.907 "is_configured": false, 00:15:49.907 "data_offset": 2048, 00:15:49.907 "data_size": 63488 00:15:49.907 } 00:15:49.907 ] 00:15:49.907 }' 00:15:49.907 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.907 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.165 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:50.165 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.166 [2024-12-06 13:09:56.672016] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.166 [2024-12-06 13:09:56.672136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.166 [2024-12-06 13:09:56.672178] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:50.166 [2024-12-06 13:09:56.672195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.166 [2024-12-06 13:09:56.672839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.166 [2024-12-06 13:09:56.672878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.166 [2024-12-06 13:09:56.673004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.166 [2024-12-06 13:09:56.673046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.166 pt2 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.166 [2024-12-06 13:09:56.679964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.166 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.424 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.424 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.424 "name": "raid_bdev1", 00:15:50.424 "uuid": "f6c8f445-fae8-4650-b3c9-12f07d770e8b", 00:15:50.424 "strip_size_kb": 64, 00:15:50.424 "state": "configuring", 00:15:50.424 "raid_level": "concat", 00:15:50.424 "superblock": true, 00:15:50.424 "num_base_bdevs": 3, 00:15:50.424 "num_base_bdevs_discovered": 1, 00:15:50.424 "num_base_bdevs_operational": 3, 00:15:50.424 "base_bdevs_list": [ 00:15:50.424 { 00:15:50.424 "name": "pt1", 00:15:50.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.424 "is_configured": true, 00:15:50.424 "data_offset": 2048, 00:15:50.424 "data_size": 63488 00:15:50.424 }, 00:15:50.424 { 00:15:50.424 "name": null, 00:15:50.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.424 "is_configured": false, 00:15:50.424 "data_offset": 0, 00:15:50.424 "data_size": 63488 00:15:50.424 }, 00:15:50.424 { 00:15:50.424 "name": null, 00:15:50.424 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.424 "is_configured": false, 00:15:50.424 "data_offset": 2048, 00:15:50.424 "data_size": 63488 00:15:50.424 } 00:15:50.424 ] 00:15:50.424 }' 00:15:50.424 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.424 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.682 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:50.682 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.682 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.682 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.682 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.682 [2024-12-06 13:09:57.204112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.682 [2024-12-06 13:09:57.204220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.683 [2024-12-06 13:09:57.204252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:50.683 [2024-12-06 13:09:57.204272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.683 [2024-12-06 13:09:57.205041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.683 [2024-12-06 13:09:57.205095] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.683 [2024-12-06 13:09:57.205211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.683 [2024-12-06 13:09:57.205252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.016 pt2 00:15:51.016 13:09:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.016 [2024-12-06 13:09:57.216062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.016 [2024-12-06 13:09:57.216119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.016 [2024-12-06 13:09:57.216143] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:51.016 [2024-12-06 13:09:57.216160] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.016 [2024-12-06 13:09:57.216692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.016 [2024-12-06 13:09:57.216743] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.016 [2024-12-06 13:09:57.216823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:51.016 [2024-12-06 13:09:57.216856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.016 [2024-12-06 13:09:57.217047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.016 [2024-12-06 13:09:57.217089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:51.016 [2024-12-06 13:09:57.217516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:15:51.016 [2024-12-06 13:09:57.217757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.016 [2024-12-06 13:09:57.217789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:51.016 [2024-12-06 13:09:57.218025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.016 pt3 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.016 13:09:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.016 "name": "raid_bdev1", 00:15:51.016 "uuid": "f6c8f445-fae8-4650-b3c9-12f07d770e8b", 00:15:51.016 "strip_size_kb": 64, 00:15:51.016 "state": "online", 00:15:51.016 "raid_level": "concat", 00:15:51.016 "superblock": true, 00:15:51.016 "num_base_bdevs": 3, 00:15:51.016 "num_base_bdevs_discovered": 3, 00:15:51.016 "num_base_bdevs_operational": 3, 00:15:51.016 "base_bdevs_list": [ 00:15:51.016 { 00:15:51.016 "name": "pt1", 00:15:51.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.016 "is_configured": true, 00:15:51.016 "data_offset": 2048, 00:15:51.016 "data_size": 63488 00:15:51.016 }, 00:15:51.016 { 00:15:51.016 "name": "pt2", 00:15:51.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.016 "is_configured": true, 00:15:51.016 "data_offset": 2048, 00:15:51.016 "data_size": 63488 00:15:51.016 }, 00:15:51.016 { 00:15:51.016 "name": "pt3", 00:15:51.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.016 "is_configured": true, 00:15:51.016 "data_offset": 2048, 00:15:51.016 "data_size": 63488 00:15:51.016 } 00:15:51.016 ] 00:15:51.016 }' 00:15:51.016 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.017 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.304 [2024-12-06 13:09:57.732701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.304 "name": "raid_bdev1", 00:15:51.304 "aliases": [ 00:15:51.304 "f6c8f445-fae8-4650-b3c9-12f07d770e8b" 00:15:51.304 ], 00:15:51.304 "product_name": "Raid Volume", 00:15:51.304 "block_size": 512, 00:15:51.304 "num_blocks": 190464, 00:15:51.304 "uuid": "f6c8f445-fae8-4650-b3c9-12f07d770e8b", 00:15:51.304 "assigned_rate_limits": { 00:15:51.304 "rw_ios_per_sec": 0, 00:15:51.304 "rw_mbytes_per_sec": 0, 00:15:51.304 "r_mbytes_per_sec": 0, 00:15:51.304 "w_mbytes_per_sec": 0 00:15:51.304 }, 00:15:51.304 "claimed": false, 00:15:51.304 "zoned": false, 00:15:51.304 "supported_io_types": { 00:15:51.304 "read": true, 00:15:51.304 "write": true, 00:15:51.304 "unmap": true, 00:15:51.304 "flush": true, 00:15:51.304 "reset": true, 00:15:51.304 "nvme_admin": false, 00:15:51.304 "nvme_io": false, 
00:15:51.304 "nvme_io_md": false, 00:15:51.304 "write_zeroes": true, 00:15:51.304 "zcopy": false, 00:15:51.304 "get_zone_info": false, 00:15:51.304 "zone_management": false, 00:15:51.304 "zone_append": false, 00:15:51.304 "compare": false, 00:15:51.304 "compare_and_write": false, 00:15:51.304 "abort": false, 00:15:51.304 "seek_hole": false, 00:15:51.304 "seek_data": false, 00:15:51.304 "copy": false, 00:15:51.304 "nvme_iov_md": false 00:15:51.304 }, 00:15:51.304 "memory_domains": [ 00:15:51.304 { 00:15:51.304 "dma_device_id": "system", 00:15:51.304 "dma_device_type": 1 00:15:51.304 }, 00:15:51.304 { 00:15:51.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.304 "dma_device_type": 2 00:15:51.304 }, 00:15:51.304 { 00:15:51.304 "dma_device_id": "system", 00:15:51.304 "dma_device_type": 1 00:15:51.304 }, 00:15:51.304 { 00:15:51.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.304 "dma_device_type": 2 00:15:51.304 }, 00:15:51.304 { 00:15:51.304 "dma_device_id": "system", 00:15:51.304 "dma_device_type": 1 00:15:51.304 }, 00:15:51.304 { 00:15:51.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.304 "dma_device_type": 2 00:15:51.304 } 00:15:51.304 ], 00:15:51.304 "driver_specific": { 00:15:51.304 "raid": { 00:15:51.304 "uuid": "f6c8f445-fae8-4650-b3c9-12f07d770e8b", 00:15:51.304 "strip_size_kb": 64, 00:15:51.304 "state": "online", 00:15:51.304 "raid_level": "concat", 00:15:51.304 "superblock": true, 00:15:51.304 "num_base_bdevs": 3, 00:15:51.304 "num_base_bdevs_discovered": 3, 00:15:51.304 "num_base_bdevs_operational": 3, 00:15:51.304 "base_bdevs_list": [ 00:15:51.304 { 00:15:51.304 "name": "pt1", 00:15:51.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.304 "is_configured": true, 00:15:51.304 "data_offset": 2048, 00:15:51.304 "data_size": 63488 00:15:51.304 }, 00:15:51.304 { 00:15:51.304 "name": "pt2", 00:15:51.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.304 "is_configured": true, 00:15:51.304 "data_offset": 2048, 00:15:51.304 
"data_size": 63488 00:15:51.304 }, 00:15:51.304 { 00:15:51.304 "name": "pt3", 00:15:51.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.304 "is_configured": true, 00:15:51.304 "data_offset": 2048, 00:15:51.304 "data_size": 63488 00:15:51.304 } 00:15:51.304 ] 00:15:51.304 } 00:15:51.304 } 00:15:51.304 }' 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:51.304 pt2 00:15:51.304 pt3' 00:15:51.304 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.563 13:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.563 [2024-12-06 13:09:58.036721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f6c8f445-fae8-4650-b3c9-12f07d770e8b '!=' f6c8f445-fae8-4650-b3c9-12f07d770e8b ']' 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67152 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67152 ']' 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67152 00:15:51.563 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:51.822 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.822 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67152 00:15:51.822 killing process with pid 67152 00:15:51.822 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.822 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.822 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67152' 00:15:51.822 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67152 00:15:51.822 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67152 00:15:51.822 
[2024-12-06 13:09:58.114564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.822 [2024-12-06 13:09:58.114720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.822 [2024-12-06 13:09:58.114819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.822 [2024-12-06 13:09:58.114840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:52.081 [2024-12-06 13:09:58.408093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.456 13:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:53.456 00:15:53.456 real 0m5.824s 00:15:53.456 user 0m8.668s 00:15:53.456 sys 0m0.875s 00:15:53.456 13:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.456 13:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.456 ************************************ 00:15:53.456 END TEST raid_superblock_test 00:15:53.456 ************************************ 00:15:53.456 13:09:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:15:53.456 13:09:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:53.456 13:09:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.456 13:09:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.456 ************************************ 00:15:53.456 START TEST raid_read_error_test 00:15:53.456 ************************************ 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:53.456 13:09:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gBbgy5eoOh 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67416 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67416 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67416 ']' 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.456 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.456 [2024-12-06 13:09:59.724772] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:15:53.456 [2024-12-06 13:09:59.724955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67416 ] 00:15:53.457 [2024-12-06 13:09:59.915008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.716 [2024-12-06 13:10:00.085636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.976 [2024-12-06 13:10:00.313554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.976 [2024-12-06 13:10:00.313656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.235 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.235 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:54.235 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.235 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.235 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.235 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 BaseBdev1_malloc 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 true 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 [2024-12-06 13:10:00.797651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:54.495 [2024-12-06 13:10:00.797744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.495 [2024-12-06 13:10:00.797777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:54.495 [2024-12-06 13:10:00.797797] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.495 [2024-12-06 13:10:00.800771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.495 [2024-12-06 13:10:00.800831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.495 BaseBdev1 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 BaseBdev2_malloc 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 true 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 [2024-12-06 13:10:00.857259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:54.495 [2024-12-06 13:10:00.857352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.495 [2024-12-06 13:10:00.857382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:54.495 [2024-12-06 13:10:00.857402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.495 [2024-12-06 13:10:00.860431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.495 [2024-12-06 13:10:00.860506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.495 BaseBdev2 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 BaseBdev3_malloc 00:15:54.495 13:10:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 true 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 [2024-12-06 13:10:00.933969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:54.495 [2024-12-06 13:10:00.934058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.495 [2024-12-06 13:10:00.934090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:54.495 [2024-12-06 13:10:00.934111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.495 [2024-12-06 13:10:00.937101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.495 [2024-12-06 13:10:00.937152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.495 BaseBdev3 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.495 [2024-12-06 13:10:00.942132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.495 [2024-12-06 13:10:00.944881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.495 [2024-12-06 13:10:00.944989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.495 [2024-12-06 13:10:00.945273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:54.495 [2024-12-06 13:10:00.945293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:54.495 [2024-12-06 13:10:00.945759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:54.495 [2024-12-06 13:10:00.946108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:54.495 [2024-12-06 13:10:00.946249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:54.495 [2024-12-06 13:10:00.946666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.495 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.496 13:10:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.496 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.496 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.496 "name": "raid_bdev1", 00:15:54.496 "uuid": "1c369d56-2f40-4c61-8c91-aab14e08ed9d", 00:15:54.496 "strip_size_kb": 64, 00:15:54.496 "state": "online", 00:15:54.496 "raid_level": "concat", 00:15:54.496 "superblock": true, 00:15:54.496 "num_base_bdevs": 3, 00:15:54.496 "num_base_bdevs_discovered": 3, 00:15:54.496 "num_base_bdevs_operational": 3, 00:15:54.496 "base_bdevs_list": [ 00:15:54.496 { 00:15:54.496 "name": "BaseBdev1", 00:15:54.496 "uuid": "69ca6d00-1c8b-5c99-b35a-8ca73f81103e", 00:15:54.496 "is_configured": true, 00:15:54.496 "data_offset": 2048, 00:15:54.496 "data_size": 63488 00:15:54.496 }, 00:15:54.496 { 00:15:54.496 "name": "BaseBdev2", 00:15:54.496 "uuid": "a206d94a-b3c3-5043-bd89-7a137adeb405", 00:15:54.496 "is_configured": true, 00:15:54.496 "data_offset": 2048, 00:15:54.496 "data_size": 63488 
00:15:54.496 }, 00:15:54.496 { 00:15:54.496 "name": "BaseBdev3", 00:15:54.496 "uuid": "5c0caa75-51cf-5bed-9c05-a01576a42c1c", 00:15:54.496 "is_configured": true, 00:15:54.496 "data_offset": 2048, 00:15:54.496 "data_size": 63488 00:15:54.496 } 00:15:54.496 ] 00:15:54.496 }' 00:15:54.496 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.496 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.062 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:55.062 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:55.062 [2024-12-06 13:10:01.588336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.996 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.254 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.254 "name": "raid_bdev1", 00:15:56.254 "uuid": "1c369d56-2f40-4c61-8c91-aab14e08ed9d", 00:15:56.254 "strip_size_kb": 64, 00:15:56.254 "state": "online", 00:15:56.254 "raid_level": "concat", 00:15:56.254 "superblock": true, 00:15:56.254 "num_base_bdevs": 3, 00:15:56.254 "num_base_bdevs_discovered": 3, 00:15:56.254 "num_base_bdevs_operational": 3, 00:15:56.254 "base_bdevs_list": [ 00:15:56.254 { 00:15:56.254 "name": "BaseBdev1", 00:15:56.254 "uuid": "69ca6d00-1c8b-5c99-b35a-8ca73f81103e", 00:15:56.254 "is_configured": true, 00:15:56.254 "data_offset": 2048, 00:15:56.254 "data_size": 63488 
00:15:56.254 }, 00:15:56.254 { 00:15:56.254 "name": "BaseBdev2", 00:15:56.254 "uuid": "a206d94a-b3c3-5043-bd89-7a137adeb405", 00:15:56.254 "is_configured": true, 00:15:56.254 "data_offset": 2048, 00:15:56.254 "data_size": 63488 00:15:56.254 }, 00:15:56.254 { 00:15:56.254 "name": "BaseBdev3", 00:15:56.254 "uuid": "5c0caa75-51cf-5bed-9c05-a01576a42c1c", 00:15:56.254 "is_configured": true, 00:15:56.254 "data_offset": 2048, 00:15:56.254 "data_size": 63488 00:15:56.254 } 00:15:56.254 ] 00:15:56.254 }' 00:15:56.254 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.254 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.512 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.512 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.512 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.512 [2024-12-06 13:10:02.998747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.512 [2024-12-06 13:10:02.999030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.512 [2024-12-06 13:10:03.002553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.512 [2024-12-06 13:10:03.002616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.512 [2024-12-06 13:10:03.002675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.512 [2024-12-06 13:10:03.002694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:56.512 { 00:15:56.512 "results": [ 00:15:56.512 { 00:15:56.512 "job": "raid_bdev1", 00:15:56.512 "core_mask": "0x1", 00:15:56.512 "workload": "randrw", 00:15:56.512 "percentage": 50, 
00:15:56.512 "status": "finished", 00:15:56.512 "queue_depth": 1, 00:15:56.512 "io_size": 131072, 00:15:56.512 "runtime": 1.408141, 00:15:56.512 "iops": 9662.384661763275, 00:15:56.512 "mibps": 1207.7980827204094, 00:15:56.512 "io_failed": 1, 00:15:56.512 "io_timeout": 0, 00:15:56.512 "avg_latency_us": 145.1640169164267, 00:15:56.512 "min_latency_us": 43.75272727272727, 00:15:56.512 "max_latency_us": 1899.0545454545454 00:15:56.512 } 00:15:56.512 ], 00:15:56.512 "core_count": 1 00:15:56.512 } 00:15:56.512 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.512 13:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67416 00:15:56.512 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67416 ']' 00:15:56.512 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67416 00:15:56.512 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:56.512 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.512 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67416 00:15:56.770 killing process with pid 67416 00:15:56.770 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.770 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.770 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67416' 00:15:56.770 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67416 00:15:56.770 [2024-12-06 13:10:03.042137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.770 13:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67416 00:15:56.770 [2024-12-06 
13:10:03.264762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.145 13:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gBbgy5eoOh 00:15:58.145 13:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:58.145 13:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:58.145 13:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:58.145 13:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:58.145 ************************************ 00:15:58.145 END TEST raid_read_error_test 00:15:58.146 ************************************ 00:15:58.146 13:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:58.146 13:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:58.146 13:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:58.146 00:15:58.146 real 0m4.822s 00:15:58.146 user 0m5.921s 00:15:58.146 sys 0m0.655s 00:15:58.146 13:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.146 13:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.146 13:10:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:15:58.146 13:10:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:58.146 13:10:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.146 13:10:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.146 ************************************ 00:15:58.146 START TEST raid_write_error_test 00:15:58.146 ************************************ 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:15:58.146 13:10:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:58.146 13:10:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0LaAtiT2KF 00:15:58.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67562 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67562 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67562 ']' 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.146 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.146 [2024-12-06 13:10:04.587385] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:15:58.146 [2024-12-06 13:10:04.588451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67562 ] 00:15:58.404 [2024-12-06 13:10:04.766641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.404 [2024-12-06 13:10:04.914217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.662 [2024-12-06 13:10:05.111576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.662 [2024-12-06 13:10:05.111660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.228 BaseBdev1_malloc 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.228 true 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.228 [2024-12-06 13:10:05.738173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:59.228 [2024-12-06 13:10:05.738246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.228 [2024-12-06 13:10:05.738283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:59.228 [2024-12-06 13:10:05.738319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.228 [2024-12-06 13:10:05.741222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.228 [2024-12-06 13:10:05.741270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.228 BaseBdev1 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.228 13:10:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.487 BaseBdev2_malloc 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.487 true 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.487 [2024-12-06 13:10:05.803155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:59.487 [2024-12-06 13:10:05.803380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.487 [2024-12-06 13:10:05.803417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:59.487 [2024-12-06 13:10:05.803436] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.487 [2024-12-06 13:10:05.806577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.487 [2024-12-06 13:10:05.806627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.487 BaseBdev2 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.487 13:10:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.487 BaseBdev3_malloc 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.487 true 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.487 [2024-12-06 13:10:05.885803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:59.487 [2024-12-06 13:10:05.885881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.487 [2024-12-06 13:10:05.885909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:59.487 [2024-12-06 13:10:05.885942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.487 [2024-12-06 13:10:05.889100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.487 [2024-12-06 13:10:05.889150] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:59.487 BaseBdev3 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.487 [2024-12-06 13:10:05.894047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.487 [2024-12-06 13:10:05.896823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.487 [2024-12-06 13:10:05.896936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.487 [2024-12-06 13:10:05.897208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:59.487 [2024-12-06 13:10:05.897227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:59.487 [2024-12-06 13:10:05.897598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:59.487 [2024-12-06 13:10:05.897820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:59.487 [2024-12-06 13:10:05.897858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:59.487 [2024-12-06 13:10:05.898139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.487 "name": "raid_bdev1", 00:15:59.487 "uuid": "a54b3043-f2cc-4103-8c51-954e9f73da49", 00:15:59.487 "strip_size_kb": 64, 00:15:59.487 "state": "online", 00:15:59.487 "raid_level": "concat", 00:15:59.487 "superblock": true, 00:15:59.487 "num_base_bdevs": 3, 00:15:59.487 "num_base_bdevs_discovered": 3, 00:15:59.487 "num_base_bdevs_operational": 3, 00:15:59.487 "base_bdevs_list": [ 00:15:59.487 { 00:15:59.487 
"name": "BaseBdev1", 00:15:59.487 "uuid": "0d63f6bc-9dfa-595d-b0ef-676129586321", 00:15:59.487 "is_configured": true, 00:15:59.487 "data_offset": 2048, 00:15:59.487 "data_size": 63488 00:15:59.487 }, 00:15:59.487 { 00:15:59.487 "name": "BaseBdev2", 00:15:59.487 "uuid": "7b5ecdf5-7e82-5e93-9cd1-4ff27a882d2c", 00:15:59.487 "is_configured": true, 00:15:59.487 "data_offset": 2048, 00:15:59.487 "data_size": 63488 00:15:59.487 }, 00:15:59.487 { 00:15:59.487 "name": "BaseBdev3", 00:15:59.487 "uuid": "a04a36f5-86e8-50df-9e4d-1bc9058b14bd", 00:15:59.487 "is_configured": true, 00:15:59.487 "data_offset": 2048, 00:15:59.487 "data_size": 63488 00:15:59.487 } 00:15:59.487 ] 00:15:59.487 }' 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.487 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.113 13:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:00.113 13:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:00.113 [2024-12-06 13:10:06.467863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.057 "name": "raid_bdev1", 00:16:01.057 "uuid": "a54b3043-f2cc-4103-8c51-954e9f73da49", 00:16:01.057 "strip_size_kb": 64, 00:16:01.057 "state": "online", 
00:16:01.057 "raid_level": "concat", 00:16:01.057 "superblock": true, 00:16:01.057 "num_base_bdevs": 3, 00:16:01.057 "num_base_bdevs_discovered": 3, 00:16:01.057 "num_base_bdevs_operational": 3, 00:16:01.057 "base_bdevs_list": [ 00:16:01.057 { 00:16:01.057 "name": "BaseBdev1", 00:16:01.057 "uuid": "0d63f6bc-9dfa-595d-b0ef-676129586321", 00:16:01.057 "is_configured": true, 00:16:01.057 "data_offset": 2048, 00:16:01.057 "data_size": 63488 00:16:01.057 }, 00:16:01.057 { 00:16:01.057 "name": "BaseBdev2", 00:16:01.057 "uuid": "7b5ecdf5-7e82-5e93-9cd1-4ff27a882d2c", 00:16:01.057 "is_configured": true, 00:16:01.057 "data_offset": 2048, 00:16:01.057 "data_size": 63488 00:16:01.057 }, 00:16:01.057 { 00:16:01.057 "name": "BaseBdev3", 00:16:01.057 "uuid": "a04a36f5-86e8-50df-9e4d-1bc9058b14bd", 00:16:01.057 "is_configured": true, 00:16:01.057 "data_offset": 2048, 00:16:01.057 "data_size": 63488 00:16:01.057 } 00:16:01.057 ] 00:16:01.057 }' 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.057 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.621 [2024-12-06 13:10:07.886509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.621 [2024-12-06 13:10:07.886687] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.621 [2024-12-06 13:10:07.890326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.621 [2024-12-06 13:10:07.890588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.621 [2024-12-06 13:10:07.890776] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.621 { 00:16:01.621 "results": [ 00:16:01.621 { 00:16:01.621 "job": "raid_bdev1", 00:16:01.621 "core_mask": "0x1", 00:16:01.621 "workload": "randrw", 00:16:01.621 "percentage": 50, 00:16:01.621 "status": "finished", 00:16:01.621 "queue_depth": 1, 00:16:01.621 "io_size": 131072, 00:16:01.621 "runtime": 1.416329, 00:16:01.621 "iops": 9836.697546968253, 00:16:01.621 "mibps": 1229.5871933710316, 00:16:01.621 "io_failed": 1, 00:16:01.621 "io_timeout": 0, 00:16:01.621 "avg_latency_us": 142.85978585829588, 00:16:01.621 "min_latency_us": 39.09818181818182, 00:16:01.621 "max_latency_us": 1832.0290909090909 00:16:01.621 } 00:16:01.621 ], 00:16:01.621 "core_count": 1 00:16:01.621 } 00:16:01.621 [2024-12-06 13:10:07.890909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67562 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67562 ']' 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67562 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67562 00:16:01.621 killing process with pid 67562 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.621 13:10:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67562' 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67562 00:16:01.621 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67562 00:16:01.621 [2024-12-06 13:10:07.928029] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.878 [2024-12-06 13:10:08.151509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0LaAtiT2KF 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:03.254 ************************************ 00:16:03.254 END TEST raid_write_error_test 00:16:03.254 ************************************ 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:16:03.254 00:16:03.254 real 0m4.877s 00:16:03.254 user 0m5.983s 00:16:03.254 sys 0m0.627s 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.254 13:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.254 13:10:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:03.254 13:10:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:16:03.254 13:10:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:03.254 13:10:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.254 13:10:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.254 ************************************ 00:16:03.254 START TEST raid_state_function_test 00:16:03.254 ************************************ 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.254 Process raid pid: 67706 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:03.254 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67706 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67706' 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67706 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67706 ']' 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.255 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.255 [2024-12-06 13:10:09.518633] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:16:03.255 [2024-12-06 13:10:09.519037] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.255 [2024-12-06 13:10:09.703775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.518 [2024-12-06 13:10:09.883478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.777 [2024-12-06 13:10:10.121535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.777 [2024-12-06 13:10:10.121892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.036 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.036 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:04.036 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:04.036 13:10:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.036 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.036 [2024-12-06 13:10:10.558202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.036 [2024-12-06 13:10:10.558592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.036 [2024-12-06 13:10:10.558765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.036 [2024-12-06 13:10:10.558811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.036 [2024-12-06 13:10:10.558832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.036 [2024-12-06 13:10:10.558859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.295 
13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.295 "name": "Existed_Raid", 00:16:04.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.295 "strip_size_kb": 0, 00:16:04.295 "state": "configuring", 00:16:04.295 "raid_level": "raid1", 00:16:04.295 "superblock": false, 00:16:04.295 "num_base_bdevs": 3, 00:16:04.295 "num_base_bdevs_discovered": 0, 00:16:04.295 "num_base_bdevs_operational": 3, 00:16:04.295 "base_bdevs_list": [ 00:16:04.295 { 00:16:04.295 "name": "BaseBdev1", 00:16:04.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.295 "is_configured": false, 00:16:04.295 "data_offset": 0, 00:16:04.295 "data_size": 0 00:16:04.295 }, 00:16:04.295 { 00:16:04.295 "name": "BaseBdev2", 00:16:04.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.295 "is_configured": false, 00:16:04.295 "data_offset": 0, 00:16:04.295 "data_size": 0 00:16:04.295 }, 00:16:04.295 { 00:16:04.295 "name": "BaseBdev3", 00:16:04.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.295 "is_configured": false, 00:16:04.295 "data_offset": 0, 00:16:04.295 "data_size": 0 00:16:04.295 } 00:16:04.295 ] 00:16:04.295 }' 00:16:04.295 13:10:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.295 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.554 [2024-12-06 13:10:11.062213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.554 [2024-12-06 13:10:11.062433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.554 [2024-12-06 13:10:11.070155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.554 [2024-12-06 13:10:11.070220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.554 [2024-12-06 13:10:11.070237] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.554 [2024-12-06 13:10:11.070253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.554 [2024-12-06 13:10:11.070263] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.554 [2024-12-06 13:10:11.070289] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.554 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.813 [2024-12-06 13:10:11.119246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.813 BaseBdev1 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.813 [ 00:16:04.813 { 00:16:04.813 "name": "BaseBdev1", 00:16:04.813 "aliases": [ 00:16:04.813 "4921eaa3-b289-4aad-b7e5-3a78439ae88f" 00:16:04.813 ], 00:16:04.813 "product_name": "Malloc disk", 00:16:04.813 "block_size": 512, 00:16:04.813 "num_blocks": 65536, 00:16:04.813 "uuid": "4921eaa3-b289-4aad-b7e5-3a78439ae88f", 00:16:04.813 "assigned_rate_limits": { 00:16:04.813 "rw_ios_per_sec": 0, 00:16:04.813 "rw_mbytes_per_sec": 0, 00:16:04.813 "r_mbytes_per_sec": 0, 00:16:04.813 "w_mbytes_per_sec": 0 00:16:04.813 }, 00:16:04.813 "claimed": true, 00:16:04.813 "claim_type": "exclusive_write", 00:16:04.813 "zoned": false, 00:16:04.813 "supported_io_types": { 00:16:04.813 "read": true, 00:16:04.813 "write": true, 00:16:04.813 "unmap": true, 00:16:04.813 "flush": true, 00:16:04.813 "reset": true, 00:16:04.813 "nvme_admin": false, 00:16:04.813 "nvme_io": false, 00:16:04.813 "nvme_io_md": false, 00:16:04.813 "write_zeroes": true, 00:16:04.813 "zcopy": true, 00:16:04.813 "get_zone_info": false, 00:16:04.813 "zone_management": false, 00:16:04.813 "zone_append": false, 00:16:04.813 "compare": false, 00:16:04.813 "compare_and_write": false, 00:16:04.813 "abort": true, 00:16:04.813 "seek_hole": false, 00:16:04.813 "seek_data": false, 00:16:04.813 "copy": true, 00:16:04.813 "nvme_iov_md": false 00:16:04.813 }, 00:16:04.813 "memory_domains": [ 00:16:04.813 { 00:16:04.813 "dma_device_id": "system", 00:16:04.813 "dma_device_type": 1 00:16:04.813 }, 00:16:04.813 { 00:16:04.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.813 "dma_device_type": 2 00:16:04.813 } 00:16:04.813 ], 00:16:04.813 "driver_specific": {} 00:16:04.813 } 00:16:04.813 ] 00:16:04.813 13:10:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:16:04.813 "name": "Existed_Raid", 00:16:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.813 "strip_size_kb": 0, 00:16:04.813 "state": "configuring", 00:16:04.813 "raid_level": "raid1", 00:16:04.813 "superblock": false, 00:16:04.813 "num_base_bdevs": 3, 00:16:04.813 "num_base_bdevs_discovered": 1, 00:16:04.813 "num_base_bdevs_operational": 3, 00:16:04.813 "base_bdevs_list": [ 00:16:04.813 { 00:16:04.813 "name": "BaseBdev1", 00:16:04.813 "uuid": "4921eaa3-b289-4aad-b7e5-3a78439ae88f", 00:16:04.813 "is_configured": true, 00:16:04.813 "data_offset": 0, 00:16:04.813 "data_size": 65536 00:16:04.813 }, 00:16:04.813 { 00:16:04.813 "name": "BaseBdev2", 00:16:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.813 "is_configured": false, 00:16:04.813 "data_offset": 0, 00:16:04.813 "data_size": 0 00:16:04.813 }, 00:16:04.813 { 00:16:04.813 "name": "BaseBdev3", 00:16:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.813 "is_configured": false, 00:16:04.813 "data_offset": 0, 00:16:04.813 "data_size": 0 00:16:04.813 } 00:16:04.813 ] 00:16:04.813 }' 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.813 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.380 [2024-12-06 13:10:11.699490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.380 [2024-12-06 13:10:11.699566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.380 [2024-12-06 13:10:11.707502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.380 [2024-12-06 13:10:11.710096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.380 [2024-12-06 13:10:11.710153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.380 [2024-12-06 13:10:11.710170] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.380 [2024-12-06 13:10:11.710185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.380 "name": "Existed_Raid", 00:16:05.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.380 "strip_size_kb": 0, 00:16:05.380 "state": "configuring", 00:16:05.380 "raid_level": "raid1", 00:16:05.380 "superblock": false, 00:16:05.380 "num_base_bdevs": 3, 00:16:05.380 "num_base_bdevs_discovered": 1, 00:16:05.380 "num_base_bdevs_operational": 3, 00:16:05.380 "base_bdevs_list": [ 00:16:05.380 { 00:16:05.380 "name": "BaseBdev1", 00:16:05.380 "uuid": "4921eaa3-b289-4aad-b7e5-3a78439ae88f", 00:16:05.380 "is_configured": true, 00:16:05.380 "data_offset": 0, 00:16:05.380 "data_size": 65536 00:16:05.380 }, 00:16:05.380 { 00:16:05.380 "name": "BaseBdev2", 00:16:05.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.380 
"is_configured": false, 00:16:05.380 "data_offset": 0, 00:16:05.380 "data_size": 0 00:16:05.380 }, 00:16:05.380 { 00:16:05.380 "name": "BaseBdev3", 00:16:05.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.380 "is_configured": false, 00:16:05.380 "data_offset": 0, 00:16:05.380 "data_size": 0 00:16:05.380 } 00:16:05.380 ] 00:16:05.380 }' 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.380 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.947 [2024-12-06 13:10:12.301117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.947 BaseBdev2 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.947 13:10:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.947 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.947 [ 00:16:05.947 { 00:16:05.947 "name": "BaseBdev2", 00:16:05.947 "aliases": [ 00:16:05.947 "962f5189-6924-41d5-bbaf-5a9daec957b4" 00:16:05.947 ], 00:16:05.947 "product_name": "Malloc disk", 00:16:05.947 "block_size": 512, 00:16:05.947 "num_blocks": 65536, 00:16:05.947 "uuid": "962f5189-6924-41d5-bbaf-5a9daec957b4", 00:16:05.947 "assigned_rate_limits": { 00:16:05.947 "rw_ios_per_sec": 0, 00:16:05.947 "rw_mbytes_per_sec": 0, 00:16:05.947 "r_mbytes_per_sec": 0, 00:16:05.947 "w_mbytes_per_sec": 0 00:16:05.947 }, 00:16:05.947 "claimed": true, 00:16:05.947 "claim_type": "exclusive_write", 00:16:05.947 "zoned": false, 00:16:05.947 "supported_io_types": { 00:16:05.947 "read": true, 00:16:05.947 "write": true, 00:16:05.947 "unmap": true, 00:16:05.947 "flush": true, 00:16:05.947 "reset": true, 00:16:05.947 "nvme_admin": false, 00:16:05.947 "nvme_io": false, 00:16:05.947 "nvme_io_md": false, 00:16:05.947 "write_zeroes": true, 00:16:05.947 "zcopy": true, 00:16:05.947 "get_zone_info": false, 00:16:05.947 "zone_management": false, 00:16:05.947 "zone_append": false, 00:16:05.947 "compare": false, 00:16:05.947 "compare_and_write": false, 00:16:05.947 "abort": true, 00:16:05.947 "seek_hole": false, 00:16:05.947 "seek_data": false, 00:16:05.948 "copy": true, 00:16:05.948 "nvme_iov_md": false 00:16:05.948 }, 00:16:05.948 
"memory_domains": [ 00:16:05.948 { 00:16:05.948 "dma_device_id": "system", 00:16:05.948 "dma_device_type": 1 00:16:05.948 }, 00:16:05.948 { 00:16:05.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.948 "dma_device_type": 2 00:16:05.948 } 00:16:05.948 ], 00:16:05.948 "driver_specific": {} 00:16:05.948 } 00:16:05.948 ] 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.948 "name": "Existed_Raid", 00:16:05.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.948 "strip_size_kb": 0, 00:16:05.948 "state": "configuring", 00:16:05.948 "raid_level": "raid1", 00:16:05.948 "superblock": false, 00:16:05.948 "num_base_bdevs": 3, 00:16:05.948 "num_base_bdevs_discovered": 2, 00:16:05.948 "num_base_bdevs_operational": 3, 00:16:05.948 "base_bdevs_list": [ 00:16:05.948 { 00:16:05.948 "name": "BaseBdev1", 00:16:05.948 "uuid": "4921eaa3-b289-4aad-b7e5-3a78439ae88f", 00:16:05.948 "is_configured": true, 00:16:05.948 "data_offset": 0, 00:16:05.948 "data_size": 65536 00:16:05.948 }, 00:16:05.948 { 00:16:05.948 "name": "BaseBdev2", 00:16:05.948 "uuid": "962f5189-6924-41d5-bbaf-5a9daec957b4", 00:16:05.948 "is_configured": true, 00:16:05.948 "data_offset": 0, 00:16:05.948 "data_size": 65536 00:16:05.948 }, 00:16:05.948 { 00:16:05.948 "name": "BaseBdev3", 00:16:05.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.948 "is_configured": false, 00:16:05.948 "data_offset": 0, 00:16:05.948 "data_size": 0 00:16:05.948 } 00:16:05.948 ] 00:16:05.948 }' 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.948 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.516 [2024-12-06 13:10:12.908368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.516 [2024-12-06 13:10:12.908480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:06.516 [2024-12-06 13:10:12.908504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:06.516 [2024-12-06 13:10:12.908871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:06.516 [2024-12-06 13:10:12.909128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:06.516 [2024-12-06 13:10:12.909145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:06.516 [2024-12-06 13:10:12.909522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.516 BaseBdev3 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.516 [ 00:16:06.516 { 00:16:06.516 "name": "BaseBdev3", 00:16:06.516 "aliases": [ 00:16:06.516 "0d008aaa-67a5-4816-8302-140f2a929e80" 00:16:06.516 ], 00:16:06.516 "product_name": "Malloc disk", 00:16:06.516 "block_size": 512, 00:16:06.516 "num_blocks": 65536, 00:16:06.516 "uuid": "0d008aaa-67a5-4816-8302-140f2a929e80", 00:16:06.516 "assigned_rate_limits": { 00:16:06.516 "rw_ios_per_sec": 0, 00:16:06.516 "rw_mbytes_per_sec": 0, 00:16:06.516 "r_mbytes_per_sec": 0, 00:16:06.516 "w_mbytes_per_sec": 0 00:16:06.516 }, 00:16:06.516 "claimed": true, 00:16:06.516 "claim_type": "exclusive_write", 00:16:06.516 "zoned": false, 00:16:06.516 "supported_io_types": { 00:16:06.516 "read": true, 00:16:06.516 "write": true, 00:16:06.516 "unmap": true, 00:16:06.516 "flush": true, 00:16:06.516 "reset": true, 00:16:06.516 "nvme_admin": false, 00:16:06.516 "nvme_io": false, 00:16:06.516 "nvme_io_md": false, 00:16:06.516 "write_zeroes": true, 00:16:06.516 "zcopy": true, 00:16:06.516 "get_zone_info": false, 00:16:06.516 "zone_management": false, 00:16:06.516 "zone_append": false, 00:16:06.516 "compare": false, 00:16:06.516 "compare_and_write": false, 00:16:06.516 "abort": true, 00:16:06.516 "seek_hole": false, 00:16:06.516 "seek_data": false, 00:16:06.516 
"copy": true, 00:16:06.516 "nvme_iov_md": false 00:16:06.516 }, 00:16:06.516 "memory_domains": [ 00:16:06.516 { 00:16:06.516 "dma_device_id": "system", 00:16:06.516 "dma_device_type": 1 00:16:06.516 }, 00:16:06.516 { 00:16:06.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.516 "dma_device_type": 2 00:16:06.516 } 00:16:06.516 ], 00:16:06.516 "driver_specific": {} 00:16:06.516 } 00:16:06.516 ] 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.516 13:10:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.516 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.516 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.516 "name": "Existed_Raid", 00:16:06.516 "uuid": "ed4edbd7-785b-487a-9e12-98cbaa12c4ee", 00:16:06.516 "strip_size_kb": 0, 00:16:06.516 "state": "online", 00:16:06.516 "raid_level": "raid1", 00:16:06.516 "superblock": false, 00:16:06.516 "num_base_bdevs": 3, 00:16:06.516 "num_base_bdevs_discovered": 3, 00:16:06.516 "num_base_bdevs_operational": 3, 00:16:06.516 "base_bdevs_list": [ 00:16:06.516 { 00:16:06.516 "name": "BaseBdev1", 00:16:06.516 "uuid": "4921eaa3-b289-4aad-b7e5-3a78439ae88f", 00:16:06.516 "is_configured": true, 00:16:06.516 "data_offset": 0, 00:16:06.516 "data_size": 65536 00:16:06.516 }, 00:16:06.516 { 00:16:06.516 "name": "BaseBdev2", 00:16:06.516 "uuid": "962f5189-6924-41d5-bbaf-5a9daec957b4", 00:16:06.516 "is_configured": true, 00:16:06.516 "data_offset": 0, 00:16:06.516 "data_size": 65536 00:16:06.516 }, 00:16:06.516 { 00:16:06.516 "name": "BaseBdev3", 00:16:06.516 "uuid": "0d008aaa-67a5-4816-8302-140f2a929e80", 00:16:06.516 "is_configured": true, 00:16:06.516 "data_offset": 0, 00:16:06.516 "data_size": 65536 00:16:06.516 } 00:16:06.516 ] 00:16:06.516 }' 00:16:06.516 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.516 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.084 13:10:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:07.084 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.085 [2024-12-06 13:10:13.453015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:07.085 "name": "Existed_Raid", 00:16:07.085 "aliases": [ 00:16:07.085 "ed4edbd7-785b-487a-9e12-98cbaa12c4ee" 00:16:07.085 ], 00:16:07.085 "product_name": "Raid Volume", 00:16:07.085 "block_size": 512, 00:16:07.085 "num_blocks": 65536, 00:16:07.085 "uuid": "ed4edbd7-785b-487a-9e12-98cbaa12c4ee", 00:16:07.085 "assigned_rate_limits": { 00:16:07.085 "rw_ios_per_sec": 0, 00:16:07.085 "rw_mbytes_per_sec": 0, 00:16:07.085 "r_mbytes_per_sec": 0, 00:16:07.085 "w_mbytes_per_sec": 0 00:16:07.085 }, 00:16:07.085 "claimed": false, 00:16:07.085 "zoned": false, 
00:16:07.085 "supported_io_types": { 00:16:07.085 "read": true, 00:16:07.085 "write": true, 00:16:07.085 "unmap": false, 00:16:07.085 "flush": false, 00:16:07.085 "reset": true, 00:16:07.085 "nvme_admin": false, 00:16:07.085 "nvme_io": false, 00:16:07.085 "nvme_io_md": false, 00:16:07.085 "write_zeroes": true, 00:16:07.085 "zcopy": false, 00:16:07.085 "get_zone_info": false, 00:16:07.085 "zone_management": false, 00:16:07.085 "zone_append": false, 00:16:07.085 "compare": false, 00:16:07.085 "compare_and_write": false, 00:16:07.085 "abort": false, 00:16:07.085 "seek_hole": false, 00:16:07.085 "seek_data": false, 00:16:07.085 "copy": false, 00:16:07.085 "nvme_iov_md": false 00:16:07.085 }, 00:16:07.085 "memory_domains": [ 00:16:07.085 { 00:16:07.085 "dma_device_id": "system", 00:16:07.085 "dma_device_type": 1 00:16:07.085 }, 00:16:07.085 { 00:16:07.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.085 "dma_device_type": 2 00:16:07.085 }, 00:16:07.085 { 00:16:07.085 "dma_device_id": "system", 00:16:07.085 "dma_device_type": 1 00:16:07.085 }, 00:16:07.085 { 00:16:07.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.085 "dma_device_type": 2 00:16:07.085 }, 00:16:07.085 { 00:16:07.085 "dma_device_id": "system", 00:16:07.085 "dma_device_type": 1 00:16:07.085 }, 00:16:07.085 { 00:16:07.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.085 "dma_device_type": 2 00:16:07.085 } 00:16:07.085 ], 00:16:07.085 "driver_specific": { 00:16:07.085 "raid": { 00:16:07.085 "uuid": "ed4edbd7-785b-487a-9e12-98cbaa12c4ee", 00:16:07.085 "strip_size_kb": 0, 00:16:07.085 "state": "online", 00:16:07.085 "raid_level": "raid1", 00:16:07.085 "superblock": false, 00:16:07.085 "num_base_bdevs": 3, 00:16:07.085 "num_base_bdevs_discovered": 3, 00:16:07.085 "num_base_bdevs_operational": 3, 00:16:07.085 "base_bdevs_list": [ 00:16:07.085 { 00:16:07.085 "name": "BaseBdev1", 00:16:07.085 "uuid": "4921eaa3-b289-4aad-b7e5-3a78439ae88f", 00:16:07.085 "is_configured": true, 00:16:07.085 
"data_offset": 0, 00:16:07.085 "data_size": 65536 00:16:07.085 }, 00:16:07.085 { 00:16:07.085 "name": "BaseBdev2", 00:16:07.085 "uuid": "962f5189-6924-41d5-bbaf-5a9daec957b4", 00:16:07.085 "is_configured": true, 00:16:07.085 "data_offset": 0, 00:16:07.085 "data_size": 65536 00:16:07.085 }, 00:16:07.085 { 00:16:07.085 "name": "BaseBdev3", 00:16:07.085 "uuid": "0d008aaa-67a5-4816-8302-140f2a929e80", 00:16:07.085 "is_configured": true, 00:16:07.085 "data_offset": 0, 00:16:07.085 "data_size": 65536 00:16:07.085 } 00:16:07.085 ] 00:16:07.085 } 00:16:07.085 } 00:16:07.085 }' 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:07.085 BaseBdev2 00:16:07.085 BaseBdev3' 00:16:07.085 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.344 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.344 [2024-12-06 13:10:13.784755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.612 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.613 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.613 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.613 "name": "Existed_Raid", 00:16:07.613 "uuid": "ed4edbd7-785b-487a-9e12-98cbaa12c4ee", 00:16:07.613 "strip_size_kb": 0, 00:16:07.613 "state": "online", 00:16:07.613 "raid_level": "raid1", 00:16:07.613 "superblock": false, 00:16:07.613 "num_base_bdevs": 3, 00:16:07.613 "num_base_bdevs_discovered": 2, 00:16:07.613 "num_base_bdevs_operational": 2, 00:16:07.613 "base_bdevs_list": [ 00:16:07.613 { 00:16:07.613 "name": null, 00:16:07.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.613 "is_configured": false, 00:16:07.613 "data_offset": 0, 00:16:07.613 "data_size": 65536 00:16:07.613 }, 00:16:07.613 { 00:16:07.613 "name": "BaseBdev2", 00:16:07.613 "uuid": "962f5189-6924-41d5-bbaf-5a9daec957b4", 00:16:07.613 "is_configured": true, 00:16:07.613 "data_offset": 0, 00:16:07.613 "data_size": 65536 00:16:07.613 }, 00:16:07.613 { 00:16:07.613 "name": "BaseBdev3", 00:16:07.613 "uuid": "0d008aaa-67a5-4816-8302-140f2a929e80", 00:16:07.613 "is_configured": true, 00:16:07.613 "data_offset": 0, 00:16:07.613 "data_size": 65536 00:16:07.613 } 00:16:07.613 ] 
00:16:07.613 }' 00:16:07.613 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.613 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.872 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:07.872 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.130 [2024-12-06 13:10:14.452244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.130 13:10:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.130 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.130 [2024-12-06 13:10:14.603076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:08.130 [2024-12-06 13:10:14.603222] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.390 [2024-12-06 13:10:14.695325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.390 [2024-12-06 13:10:14.695396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.390 [2024-12-06 13:10:14.695419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.390 13:10:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.390 BaseBdev2 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.390 
13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.390 [ 00:16:08.390 { 00:16:08.390 "name": "BaseBdev2", 00:16:08.390 "aliases": [ 00:16:08.390 "77afe45f-655b-4c1c-8c1e-baf2e40ccb09" 00:16:08.390 ], 00:16:08.390 "product_name": "Malloc disk", 00:16:08.390 "block_size": 512, 00:16:08.390 "num_blocks": 65536, 00:16:08.390 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:08.390 "assigned_rate_limits": { 00:16:08.390 "rw_ios_per_sec": 0, 00:16:08.390 "rw_mbytes_per_sec": 0, 00:16:08.390 "r_mbytes_per_sec": 0, 00:16:08.390 "w_mbytes_per_sec": 0 00:16:08.390 }, 00:16:08.390 "claimed": false, 00:16:08.390 "zoned": false, 00:16:08.390 "supported_io_types": { 00:16:08.390 "read": true, 00:16:08.390 "write": true, 00:16:08.390 "unmap": true, 00:16:08.390 "flush": true, 00:16:08.390 "reset": true, 00:16:08.390 "nvme_admin": false, 00:16:08.390 "nvme_io": false, 00:16:08.390 "nvme_io_md": false, 00:16:08.390 "write_zeroes": true, 
00:16:08.390 "zcopy": true, 00:16:08.390 "get_zone_info": false, 00:16:08.390 "zone_management": false, 00:16:08.390 "zone_append": false, 00:16:08.390 "compare": false, 00:16:08.390 "compare_and_write": false, 00:16:08.390 "abort": true, 00:16:08.390 "seek_hole": false, 00:16:08.390 "seek_data": false, 00:16:08.390 "copy": true, 00:16:08.390 "nvme_iov_md": false 00:16:08.390 }, 00:16:08.390 "memory_domains": [ 00:16:08.390 { 00:16:08.390 "dma_device_id": "system", 00:16:08.390 "dma_device_type": 1 00:16:08.390 }, 00:16:08.390 { 00:16:08.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.390 "dma_device_type": 2 00:16:08.390 } 00:16:08.390 ], 00:16:08.390 "driver_specific": {} 00:16:08.390 } 00:16:08.390 ] 00:16:08.390 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.391 BaseBdev3 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.391 13:10:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.391 [ 00:16:08.391 { 00:16:08.391 "name": "BaseBdev3", 00:16:08.391 "aliases": [ 00:16:08.391 "73202657-e6a3-4218-b11c-4728c49f0bc8" 00:16:08.391 ], 00:16:08.391 "product_name": "Malloc disk", 00:16:08.391 "block_size": 512, 00:16:08.391 "num_blocks": 65536, 00:16:08.391 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:08.391 "assigned_rate_limits": { 00:16:08.391 "rw_ios_per_sec": 0, 00:16:08.391 "rw_mbytes_per_sec": 0, 00:16:08.391 "r_mbytes_per_sec": 0, 00:16:08.391 "w_mbytes_per_sec": 0 00:16:08.391 }, 00:16:08.391 "claimed": false, 00:16:08.391 "zoned": false, 00:16:08.391 "supported_io_types": { 00:16:08.391 "read": true, 00:16:08.391 "write": true, 00:16:08.391 "unmap": true, 00:16:08.391 "flush": true, 00:16:08.391 "reset": true, 00:16:08.391 "nvme_admin": false, 00:16:08.391 "nvme_io": false, 00:16:08.391 "nvme_io_md": false, 00:16:08.391 "write_zeroes": true, 
00:16:08.391 "zcopy": true, 00:16:08.391 "get_zone_info": false, 00:16:08.391 "zone_management": false, 00:16:08.391 "zone_append": false, 00:16:08.391 "compare": false, 00:16:08.391 "compare_and_write": false, 00:16:08.391 "abort": true, 00:16:08.391 "seek_hole": false, 00:16:08.391 "seek_data": false, 00:16:08.391 "copy": true, 00:16:08.391 "nvme_iov_md": false 00:16:08.391 }, 00:16:08.391 "memory_domains": [ 00:16:08.391 { 00:16:08.391 "dma_device_id": "system", 00:16:08.391 "dma_device_type": 1 00:16:08.391 }, 00:16:08.391 { 00:16:08.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.391 "dma_device_type": 2 00:16:08.391 } 00:16:08.391 ], 00:16:08.391 "driver_specific": {} 00:16:08.391 } 00:16:08.391 ] 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.391 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.650 [2024-12-06 13:10:14.917386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.650 [2024-12-06 13:10:14.917468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.650 [2024-12-06 13:10:14.917501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.650 [2024-12-06 13:10:14.920176] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:16:08.650 "name": "Existed_Raid", 00:16:08.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.650 "strip_size_kb": 0, 00:16:08.650 "state": "configuring", 00:16:08.650 "raid_level": "raid1", 00:16:08.650 "superblock": false, 00:16:08.650 "num_base_bdevs": 3, 00:16:08.650 "num_base_bdevs_discovered": 2, 00:16:08.650 "num_base_bdevs_operational": 3, 00:16:08.650 "base_bdevs_list": [ 00:16:08.650 { 00:16:08.650 "name": "BaseBdev1", 00:16:08.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.650 "is_configured": false, 00:16:08.650 "data_offset": 0, 00:16:08.650 "data_size": 0 00:16:08.650 }, 00:16:08.650 { 00:16:08.650 "name": "BaseBdev2", 00:16:08.650 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:08.650 "is_configured": true, 00:16:08.650 "data_offset": 0, 00:16:08.650 "data_size": 65536 00:16:08.650 }, 00:16:08.650 { 00:16:08.650 "name": "BaseBdev3", 00:16:08.650 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:08.650 "is_configured": true, 00:16:08.650 "data_offset": 0, 00:16:08.650 "data_size": 65536 00:16:08.650 } 00:16:08.650 ] 00:16:08.650 }' 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.650 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.217 [2024-12-06 13:10:15.449564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.217 "name": "Existed_Raid", 00:16:09.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.217 "strip_size_kb": 0, 00:16:09.217 "state": "configuring", 00:16:09.217 "raid_level": "raid1", 00:16:09.217 "superblock": false, 00:16:09.217 "num_base_bdevs": 3, 
00:16:09.217 "num_base_bdevs_discovered": 1, 00:16:09.217 "num_base_bdevs_operational": 3, 00:16:09.217 "base_bdevs_list": [ 00:16:09.217 { 00:16:09.217 "name": "BaseBdev1", 00:16:09.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.217 "is_configured": false, 00:16:09.217 "data_offset": 0, 00:16:09.217 "data_size": 0 00:16:09.217 }, 00:16:09.217 { 00:16:09.217 "name": null, 00:16:09.217 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:09.217 "is_configured": false, 00:16:09.217 "data_offset": 0, 00:16:09.217 "data_size": 65536 00:16:09.217 }, 00:16:09.217 { 00:16:09.217 "name": "BaseBdev3", 00:16:09.217 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:09.217 "is_configured": true, 00:16:09.217 "data_offset": 0, 00:16:09.217 "data_size": 65536 00:16:09.217 } 00:16:09.217 ] 00:16:09.217 }' 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.217 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.476 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.476 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.476 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.476 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.476 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.735 13:10:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 [2024-12-06 13:10:16.046782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.735 BaseBdev1 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 [ 00:16:09.735 { 00:16:09.735 "name": "BaseBdev1", 00:16:09.735 "aliases": [ 00:16:09.735 "f4be7ce1-8a19-444c-8ce7-85197e5487e6" 00:16:09.735 ], 00:16:09.735 "product_name": "Malloc disk", 
00:16:09.735 "block_size": 512, 00:16:09.735 "num_blocks": 65536, 00:16:09.735 "uuid": "f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:09.735 "assigned_rate_limits": { 00:16:09.735 "rw_ios_per_sec": 0, 00:16:09.735 "rw_mbytes_per_sec": 0, 00:16:09.735 "r_mbytes_per_sec": 0, 00:16:09.735 "w_mbytes_per_sec": 0 00:16:09.735 }, 00:16:09.735 "claimed": true, 00:16:09.735 "claim_type": "exclusive_write", 00:16:09.735 "zoned": false, 00:16:09.735 "supported_io_types": { 00:16:09.735 "read": true, 00:16:09.735 "write": true, 00:16:09.735 "unmap": true, 00:16:09.735 "flush": true, 00:16:09.735 "reset": true, 00:16:09.735 "nvme_admin": false, 00:16:09.735 "nvme_io": false, 00:16:09.735 "nvme_io_md": false, 00:16:09.735 "write_zeroes": true, 00:16:09.735 "zcopy": true, 00:16:09.735 "get_zone_info": false, 00:16:09.735 "zone_management": false, 00:16:09.735 "zone_append": false, 00:16:09.735 "compare": false, 00:16:09.735 "compare_and_write": false, 00:16:09.735 "abort": true, 00:16:09.735 "seek_hole": false, 00:16:09.735 "seek_data": false, 00:16:09.735 "copy": true, 00:16:09.735 "nvme_iov_md": false 00:16:09.735 }, 00:16:09.735 "memory_domains": [ 00:16:09.735 { 00:16:09.735 "dma_device_id": "system", 00:16:09.735 "dma_device_type": 1 00:16:09.735 }, 00:16:09.735 { 00:16:09.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.735 "dma_device_type": 2 00:16:09.735 } 00:16:09.735 ], 00:16:09.735 "driver_specific": {} 00:16:09.735 } 00:16:09.735 ] 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.735 "name": "Existed_Raid", 00:16:09.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.735 "strip_size_kb": 0, 00:16:09.735 "state": "configuring", 00:16:09.735 "raid_level": "raid1", 00:16:09.735 "superblock": false, 00:16:09.735 "num_base_bdevs": 3, 00:16:09.735 "num_base_bdevs_discovered": 2, 00:16:09.735 "num_base_bdevs_operational": 3, 00:16:09.735 "base_bdevs_list": [ 00:16:09.735 { 00:16:09.735 "name": "BaseBdev1", 00:16:09.735 "uuid": 
"f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:09.735 "is_configured": true, 00:16:09.735 "data_offset": 0, 00:16:09.735 "data_size": 65536 00:16:09.735 }, 00:16:09.735 { 00:16:09.735 "name": null, 00:16:09.735 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:09.735 "is_configured": false, 00:16:09.735 "data_offset": 0, 00:16:09.735 "data_size": 65536 00:16:09.735 }, 00:16:09.735 { 00:16:09.735 "name": "BaseBdev3", 00:16:09.735 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:09.735 "is_configured": true, 00:16:09.735 "data_offset": 0, 00:16:09.735 "data_size": 65536 00:16:09.735 } 00:16:09.735 ] 00:16:09.735 }' 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.735 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.321 [2024-12-06 13:10:16.675010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:10.321 13:10:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.321 "name": "Existed_Raid", 00:16:10.321 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:10.321 "strip_size_kb": 0, 00:16:10.321 "state": "configuring", 00:16:10.321 "raid_level": "raid1", 00:16:10.321 "superblock": false, 00:16:10.321 "num_base_bdevs": 3, 00:16:10.321 "num_base_bdevs_discovered": 1, 00:16:10.321 "num_base_bdevs_operational": 3, 00:16:10.321 "base_bdevs_list": [ 00:16:10.321 { 00:16:10.321 "name": "BaseBdev1", 00:16:10.321 "uuid": "f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:10.321 "is_configured": true, 00:16:10.321 "data_offset": 0, 00:16:10.321 "data_size": 65536 00:16:10.321 }, 00:16:10.321 { 00:16:10.321 "name": null, 00:16:10.321 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:10.321 "is_configured": false, 00:16:10.321 "data_offset": 0, 00:16:10.321 "data_size": 65536 00:16:10.321 }, 00:16:10.321 { 00:16:10.321 "name": null, 00:16:10.321 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:10.321 "is_configured": false, 00:16:10.321 "data_offset": 0, 00:16:10.321 "data_size": 65536 00:16:10.321 } 00:16:10.321 ] 00:16:10.321 }' 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.321 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.888 [2024-12-06 13:10:17.251229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.888 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.888 "name": "Existed_Raid", 00:16:10.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.888 "strip_size_kb": 0, 00:16:10.889 "state": "configuring", 00:16:10.889 "raid_level": "raid1", 00:16:10.889 "superblock": false, 00:16:10.889 "num_base_bdevs": 3, 00:16:10.889 "num_base_bdevs_discovered": 2, 00:16:10.889 "num_base_bdevs_operational": 3, 00:16:10.889 "base_bdevs_list": [ 00:16:10.889 { 00:16:10.889 "name": "BaseBdev1", 00:16:10.889 "uuid": "f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:10.889 "is_configured": true, 00:16:10.889 "data_offset": 0, 00:16:10.889 "data_size": 65536 00:16:10.889 }, 00:16:10.889 { 00:16:10.889 "name": null, 00:16:10.889 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:10.889 "is_configured": false, 00:16:10.889 "data_offset": 0, 00:16:10.889 "data_size": 65536 00:16:10.889 }, 00:16:10.889 { 00:16:10.889 "name": "BaseBdev3", 00:16:10.889 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:10.889 "is_configured": true, 00:16:10.889 "data_offset": 0, 00:16:10.889 "data_size": 65536 00:16:10.889 } 00:16:10.889 ] 00:16:10.889 }' 00:16:10.889 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.889 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.455 13:10:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.455 [2024-12-06 13:10:17.863392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.455 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.713 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.713 "name": "Existed_Raid", 00:16:11.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.713 "strip_size_kb": 0, 00:16:11.713 "state": "configuring", 00:16:11.714 "raid_level": "raid1", 00:16:11.714 "superblock": false, 00:16:11.714 "num_base_bdevs": 3, 00:16:11.714 "num_base_bdevs_discovered": 1, 00:16:11.714 "num_base_bdevs_operational": 3, 00:16:11.714 "base_bdevs_list": [ 00:16:11.714 { 00:16:11.714 "name": null, 00:16:11.714 "uuid": "f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:11.714 "is_configured": false, 00:16:11.714 "data_offset": 0, 00:16:11.714 "data_size": 65536 00:16:11.714 }, 00:16:11.714 { 00:16:11.714 "name": null, 00:16:11.714 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:11.714 "is_configured": false, 00:16:11.714 "data_offset": 0, 00:16:11.714 "data_size": 65536 00:16:11.714 }, 00:16:11.714 { 00:16:11.714 "name": "BaseBdev3", 00:16:11.714 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:11.714 "is_configured": true, 00:16:11.714 "data_offset": 0, 00:16:11.714 "data_size": 65536 00:16:11.714 } 00:16:11.714 ] 00:16:11.714 }' 00:16:11.714 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.714 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.973 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.973 [2024-12-06 13:10:18.497806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.232 "name": "Existed_Raid", 00:16:12.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.232 "strip_size_kb": 0, 00:16:12.232 "state": "configuring", 00:16:12.232 "raid_level": "raid1", 00:16:12.232 "superblock": false, 00:16:12.232 "num_base_bdevs": 3, 00:16:12.232 "num_base_bdevs_discovered": 2, 00:16:12.232 "num_base_bdevs_operational": 3, 00:16:12.232 "base_bdevs_list": [ 00:16:12.232 { 00:16:12.232 "name": null, 00:16:12.232 "uuid": "f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:12.232 "is_configured": false, 00:16:12.232 "data_offset": 0, 00:16:12.232 "data_size": 65536 00:16:12.232 }, 00:16:12.232 { 00:16:12.232 "name": "BaseBdev2", 00:16:12.232 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:12.232 "is_configured": true, 00:16:12.232 "data_offset": 0, 00:16:12.232 "data_size": 65536 00:16:12.232 }, 00:16:12.232 { 00:16:12.232 "name": "BaseBdev3", 
00:16:12.232 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:12.232 "is_configured": true, 00:16:12.232 "data_offset": 0, 00:16:12.232 "data_size": 65536 00:16:12.232 } 00:16:12.232 ] 00:16:12.232 }' 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.232 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.491 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.491 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:12.491 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.491 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f4be7ce1-8a19-444c-8ce7-85197e5487e6 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.791 [2024-12-06 13:10:19.144303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:12.791 [2024-12-06 13:10:19.144366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:12.791 [2024-12-06 13:10:19.144377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:12.791 [2024-12-06 13:10:19.144769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:12.791 [2024-12-06 13:10:19.145010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:12.791 [2024-12-06 13:10:19.145037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:12.791 [2024-12-06 13:10:19.145372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.791 NewBaseBdev 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.791 
13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.791 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.791 [ 00:16:12.791 { 00:16:12.791 "name": "NewBaseBdev", 00:16:12.791 "aliases": [ 00:16:12.791 "f4be7ce1-8a19-444c-8ce7-85197e5487e6" 00:16:12.791 ], 00:16:12.791 "product_name": "Malloc disk", 00:16:12.791 "block_size": 512, 00:16:12.791 "num_blocks": 65536, 00:16:12.791 "uuid": "f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:12.791 "assigned_rate_limits": { 00:16:12.791 "rw_ios_per_sec": 0, 00:16:12.791 "rw_mbytes_per_sec": 0, 00:16:12.791 "r_mbytes_per_sec": 0, 00:16:12.791 "w_mbytes_per_sec": 0 00:16:12.791 }, 00:16:12.791 "claimed": true, 00:16:12.791 "claim_type": "exclusive_write", 00:16:12.791 "zoned": false, 00:16:12.791 "supported_io_types": { 00:16:12.791 "read": true, 00:16:12.791 "write": true, 00:16:12.791 "unmap": true, 00:16:12.791 "flush": true, 00:16:12.791 "reset": true, 00:16:12.791 "nvme_admin": false, 00:16:12.791 "nvme_io": false, 00:16:12.791 "nvme_io_md": false, 00:16:12.791 "write_zeroes": true, 00:16:12.791 "zcopy": true, 00:16:12.791 "get_zone_info": false, 00:16:12.791 "zone_management": false, 00:16:12.791 "zone_append": false, 00:16:12.791 "compare": false, 00:16:12.792 "compare_and_write": false, 00:16:12.792 "abort": true, 00:16:12.792 "seek_hole": false, 00:16:12.792 "seek_data": false, 00:16:12.792 "copy": true, 00:16:12.792 "nvme_iov_md": false 00:16:12.792 }, 00:16:12.792 "memory_domains": [ 00:16:12.792 { 00:16:12.792 "dma_device_id": "system", 00:16:12.792 "dma_device_type": 1 
00:16:12.792 }, 00:16:12.792 { 00:16:12.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.792 "dma_device_type": 2 00:16:12.792 } 00:16:12.792 ], 00:16:12.792 "driver_specific": {} 00:16:12.792 } 00:16:12.792 ] 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.792 "name": "Existed_Raid", 00:16:12.792 "uuid": "3b45723b-3441-4c9e-be7b-a5f6d1bafe98", 00:16:12.792 "strip_size_kb": 0, 00:16:12.792 "state": "online", 00:16:12.792 "raid_level": "raid1", 00:16:12.792 "superblock": false, 00:16:12.792 "num_base_bdevs": 3, 00:16:12.792 "num_base_bdevs_discovered": 3, 00:16:12.792 "num_base_bdevs_operational": 3, 00:16:12.792 "base_bdevs_list": [ 00:16:12.792 { 00:16:12.792 "name": "NewBaseBdev", 00:16:12.792 "uuid": "f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:12.792 "is_configured": true, 00:16:12.792 "data_offset": 0, 00:16:12.792 "data_size": 65536 00:16:12.792 }, 00:16:12.792 { 00:16:12.792 "name": "BaseBdev2", 00:16:12.792 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:12.792 "is_configured": true, 00:16:12.792 "data_offset": 0, 00:16:12.792 "data_size": 65536 00:16:12.792 }, 00:16:12.792 { 00:16:12.792 "name": "BaseBdev3", 00:16:12.792 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:12.792 "is_configured": true, 00:16:12.792 "data_offset": 0, 00:16:12.792 "data_size": 65536 00:16:12.792 } 00:16:12.792 ] 00:16:12.792 }' 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.792 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.359 [2024-12-06 13:10:19.721021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.359 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:13.359 "name": "Existed_Raid", 00:16:13.359 "aliases": [ 00:16:13.359 "3b45723b-3441-4c9e-be7b-a5f6d1bafe98" 00:16:13.359 ], 00:16:13.359 "product_name": "Raid Volume", 00:16:13.359 "block_size": 512, 00:16:13.359 "num_blocks": 65536, 00:16:13.359 "uuid": "3b45723b-3441-4c9e-be7b-a5f6d1bafe98", 00:16:13.359 "assigned_rate_limits": { 00:16:13.359 "rw_ios_per_sec": 0, 00:16:13.359 "rw_mbytes_per_sec": 0, 00:16:13.359 "r_mbytes_per_sec": 0, 00:16:13.359 "w_mbytes_per_sec": 0 00:16:13.359 }, 00:16:13.359 "claimed": false, 00:16:13.359 "zoned": false, 00:16:13.359 "supported_io_types": { 00:16:13.359 "read": true, 00:16:13.359 "write": true, 00:16:13.359 "unmap": false, 00:16:13.359 "flush": false, 00:16:13.359 "reset": true, 00:16:13.359 "nvme_admin": false, 00:16:13.359 "nvme_io": false, 00:16:13.359 "nvme_io_md": false, 00:16:13.359 "write_zeroes": true, 00:16:13.359 "zcopy": false, 00:16:13.359 "get_zone_info": false, 00:16:13.359 "zone_management": false, 00:16:13.359 
"zone_append": false, 00:16:13.359 "compare": false, 00:16:13.359 "compare_and_write": false, 00:16:13.359 "abort": false, 00:16:13.359 "seek_hole": false, 00:16:13.359 "seek_data": false, 00:16:13.359 "copy": false, 00:16:13.359 "nvme_iov_md": false 00:16:13.359 }, 00:16:13.359 "memory_domains": [ 00:16:13.359 { 00:16:13.359 "dma_device_id": "system", 00:16:13.359 "dma_device_type": 1 00:16:13.359 }, 00:16:13.359 { 00:16:13.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.359 "dma_device_type": 2 00:16:13.359 }, 00:16:13.359 { 00:16:13.359 "dma_device_id": "system", 00:16:13.359 "dma_device_type": 1 00:16:13.359 }, 00:16:13.359 { 00:16:13.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.359 "dma_device_type": 2 00:16:13.359 }, 00:16:13.359 { 00:16:13.359 "dma_device_id": "system", 00:16:13.359 "dma_device_type": 1 00:16:13.359 }, 00:16:13.359 { 00:16:13.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.359 "dma_device_type": 2 00:16:13.359 } 00:16:13.359 ], 00:16:13.359 "driver_specific": { 00:16:13.359 "raid": { 00:16:13.359 "uuid": "3b45723b-3441-4c9e-be7b-a5f6d1bafe98", 00:16:13.359 "strip_size_kb": 0, 00:16:13.359 "state": "online", 00:16:13.359 "raid_level": "raid1", 00:16:13.359 "superblock": false, 00:16:13.359 "num_base_bdevs": 3, 00:16:13.359 "num_base_bdevs_discovered": 3, 00:16:13.359 "num_base_bdevs_operational": 3, 00:16:13.359 "base_bdevs_list": [ 00:16:13.359 { 00:16:13.359 "name": "NewBaseBdev", 00:16:13.359 "uuid": "f4be7ce1-8a19-444c-8ce7-85197e5487e6", 00:16:13.359 "is_configured": true, 00:16:13.359 "data_offset": 0, 00:16:13.359 "data_size": 65536 00:16:13.359 }, 00:16:13.359 { 00:16:13.359 "name": "BaseBdev2", 00:16:13.359 "uuid": "77afe45f-655b-4c1c-8c1e-baf2e40ccb09", 00:16:13.359 "is_configured": true, 00:16:13.359 "data_offset": 0, 00:16:13.359 "data_size": 65536 00:16:13.359 }, 00:16:13.359 { 00:16:13.359 "name": "BaseBdev3", 00:16:13.359 "uuid": "73202657-e6a3-4218-b11c-4728c49f0bc8", 00:16:13.359 "is_configured": true, 
00:16:13.359 "data_offset": 0, 00:16:13.359 "data_size": 65536 00:16:13.359 } 00:16:13.359 ] 00:16:13.359 } 00:16:13.359 } 00:16:13.359 }' 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:13.360 BaseBdev2 00:16:13.360 BaseBdev3' 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.360 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.618 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.618 [2024-12-06 13:10:20.032682] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:16:13.618 [2024-12-06 13:10:20.032725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.618 [2024-12-06 13:10:20.032874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.618 [2024-12-06 13:10:20.033231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.618 [2024-12-06 13:10:20.033248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67706 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67706 ']' 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67706 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67706 00:16:13.618 killing process with pid 67706 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67706' 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67706 00:16:13.618 [2024-12-06 13:10:20.073083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:16:13.618 13:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67706 00:16:13.875 [2024-12-06 13:10:20.326869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:15.250 00:16:15.250 real 0m12.034s 00:16:15.250 user 0m19.847s 00:16:15.250 sys 0m1.741s 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.250 ************************************ 00:16:15.250 END TEST raid_state_function_test 00:16:15.250 ************************************ 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.250 13:10:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:15.250 13:10:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:15.250 13:10:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.250 13:10:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.250 ************************************ 00:16:15.250 START TEST raid_state_function_test_sb 00:16:15.250 ************************************ 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:15.250 Process raid pid: 68344 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68344 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68344' 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68344 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68344 ']' 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.250 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.250 [2024-12-06 13:10:21.618051] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:16:15.250 [2024-12-06 13:10:21.618582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.507 [2024-12-06 13:10:21.806633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.507 [2024-12-06 13:10:21.955433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.765 [2024-12-06 13:10:22.183249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.765 [2024-12-06 13:10:22.183558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.331 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.331 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.332 [2024-12-06 13:10:22.579913] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.332 [2024-12-06 13:10:22.580010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.332 [2024-12-06 13:10:22.580029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.332 [2024-12-06 13:10:22.580044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.332 [2024-12-06 13:10:22.580054] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:16.332 [2024-12-06 13:10:22.580067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.332 "name": "Existed_Raid", 00:16:16.332 "uuid": "60467041-2a87-4989-bcc5-68516c852cda", 00:16:16.332 "strip_size_kb": 0, 00:16:16.332 "state": "configuring", 00:16:16.332 "raid_level": "raid1", 00:16:16.332 "superblock": true, 00:16:16.332 "num_base_bdevs": 3, 00:16:16.332 "num_base_bdevs_discovered": 0, 00:16:16.332 "num_base_bdevs_operational": 3, 00:16:16.332 "base_bdevs_list": [ 00:16:16.332 { 00:16:16.332 "name": "BaseBdev1", 00:16:16.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.332 "is_configured": false, 00:16:16.332 "data_offset": 0, 00:16:16.332 "data_size": 0 00:16:16.332 }, 00:16:16.332 { 00:16:16.332 "name": "BaseBdev2", 00:16:16.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.332 "is_configured": false, 00:16:16.332 "data_offset": 0, 00:16:16.332 "data_size": 0 00:16:16.332 }, 00:16:16.332 { 00:16:16.332 "name": "BaseBdev3", 00:16:16.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.332 "is_configured": false, 00:16:16.332 "data_offset": 0, 00:16:16.332 "data_size": 0 00:16:16.332 } 00:16:16.332 ] 00:16:16.332 }' 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.332 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.590 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:16.590 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.590 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.590 [2024-12-06 13:10:23.103968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.590 [2024-12-06 13:10:23.104026] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:16.590 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.590 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:16.590 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.590 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.590 [2024-12-06 13:10:23.111929] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.590 [2024-12-06 13:10:23.111988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.590 [2024-12-06 13:10:23.112004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.590 [2024-12-06 13:10:23.112021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.590 [2024-12-06 13:10:23.112031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:16.590 [2024-12-06 13:10:23.112045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.848 [2024-12-06 13:10:23.160860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.848 BaseBdev1 
00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.848 [ 00:16:16.848 { 00:16:16.848 "name": "BaseBdev1", 00:16:16.848 "aliases": [ 00:16:16.848 "daaff0ca-40d1-4321-863d-a553283b7394" 00:16:16.848 ], 00:16:16.848 "product_name": "Malloc disk", 00:16:16.848 "block_size": 512, 00:16:16.848 "num_blocks": 65536, 00:16:16.848 "uuid": "daaff0ca-40d1-4321-863d-a553283b7394", 00:16:16.848 "assigned_rate_limits": { 00:16:16.848 
"rw_ios_per_sec": 0, 00:16:16.848 "rw_mbytes_per_sec": 0, 00:16:16.848 "r_mbytes_per_sec": 0, 00:16:16.848 "w_mbytes_per_sec": 0 00:16:16.848 }, 00:16:16.848 "claimed": true, 00:16:16.848 "claim_type": "exclusive_write", 00:16:16.848 "zoned": false, 00:16:16.848 "supported_io_types": { 00:16:16.848 "read": true, 00:16:16.848 "write": true, 00:16:16.848 "unmap": true, 00:16:16.848 "flush": true, 00:16:16.848 "reset": true, 00:16:16.848 "nvme_admin": false, 00:16:16.848 "nvme_io": false, 00:16:16.848 "nvme_io_md": false, 00:16:16.848 "write_zeroes": true, 00:16:16.848 "zcopy": true, 00:16:16.848 "get_zone_info": false, 00:16:16.848 "zone_management": false, 00:16:16.848 "zone_append": false, 00:16:16.848 "compare": false, 00:16:16.848 "compare_and_write": false, 00:16:16.848 "abort": true, 00:16:16.848 "seek_hole": false, 00:16:16.848 "seek_data": false, 00:16:16.848 "copy": true, 00:16:16.848 "nvme_iov_md": false 00:16:16.848 }, 00:16:16.848 "memory_domains": [ 00:16:16.848 { 00:16:16.848 "dma_device_id": "system", 00:16:16.848 "dma_device_type": 1 00:16:16.848 }, 00:16:16.848 { 00:16:16.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.848 "dma_device_type": 2 00:16:16.848 } 00:16:16.848 ], 00:16:16.848 "driver_specific": {} 00:16:16.848 } 00:16:16.848 ] 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.848 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.849 "name": "Existed_Raid", 00:16:16.849 "uuid": "c7db5d5e-58fc-40af-9ebe-75600a05b1ca", 00:16:16.849 "strip_size_kb": 0, 00:16:16.849 "state": "configuring", 00:16:16.849 "raid_level": "raid1", 00:16:16.849 "superblock": true, 00:16:16.849 "num_base_bdevs": 3, 00:16:16.849 "num_base_bdevs_discovered": 1, 00:16:16.849 "num_base_bdevs_operational": 3, 00:16:16.849 "base_bdevs_list": [ 00:16:16.849 { 00:16:16.849 "name": "BaseBdev1", 00:16:16.849 "uuid": "daaff0ca-40d1-4321-863d-a553283b7394", 00:16:16.849 "is_configured": true, 00:16:16.849 "data_offset": 2048, 00:16:16.849 "data_size": 63488 
00:16:16.849 }, 00:16:16.849 { 00:16:16.849 "name": "BaseBdev2", 00:16:16.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.849 "is_configured": false, 00:16:16.849 "data_offset": 0, 00:16:16.849 "data_size": 0 00:16:16.849 }, 00:16:16.849 { 00:16:16.849 "name": "BaseBdev3", 00:16:16.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.849 "is_configured": false, 00:16:16.849 "data_offset": 0, 00:16:16.849 "data_size": 0 00:16:16.849 } 00:16:16.849 ] 00:16:16.849 }' 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.849 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.417 [2024-12-06 13:10:23.709099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.417 [2024-12-06 13:10:23.709176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.417 [2024-12-06 13:10:23.717139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.417 [2024-12-06 13:10:23.719825] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.417 [2024-12-06 13:10:23.719884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.417 [2024-12-06 13:10:23.719901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:17.417 [2024-12-06 13:10:23.719917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.417 "name": "Existed_Raid", 00:16:17.417 "uuid": "9fefbefa-507a-410b-83d4-5d5c4aff7be2", 00:16:17.417 "strip_size_kb": 0, 00:16:17.417 "state": "configuring", 00:16:17.417 "raid_level": "raid1", 00:16:17.417 "superblock": true, 00:16:17.417 "num_base_bdevs": 3, 00:16:17.417 "num_base_bdevs_discovered": 1, 00:16:17.417 "num_base_bdevs_operational": 3, 00:16:17.417 "base_bdevs_list": [ 00:16:17.417 { 00:16:17.417 "name": "BaseBdev1", 00:16:17.417 "uuid": "daaff0ca-40d1-4321-863d-a553283b7394", 00:16:17.417 "is_configured": true, 00:16:17.417 "data_offset": 2048, 00:16:17.417 "data_size": 63488 00:16:17.417 }, 00:16:17.417 { 00:16:17.417 "name": "BaseBdev2", 00:16:17.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.417 "is_configured": false, 00:16:17.417 "data_offset": 0, 00:16:17.417 "data_size": 0 00:16:17.417 }, 00:16:17.417 { 00:16:17.417 "name": "BaseBdev3", 00:16:17.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.417 "is_configured": false, 00:16:17.417 "data_offset": 0, 00:16:17.417 "data_size": 0 00:16:17.417 } 00:16:17.417 ] 00:16:17.417 }' 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.417 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 [2024-12-06 13:10:24.307143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.984 BaseBdev2 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 [ 00:16:17.984 { 00:16:17.984 "name": "BaseBdev2", 00:16:17.984 "aliases": [ 00:16:17.984 "d178610c-fdba-4c5c-9cef-53aeaf833c8f" 00:16:17.984 ], 00:16:17.984 "product_name": "Malloc disk", 00:16:17.984 "block_size": 512, 00:16:17.984 "num_blocks": 65536, 00:16:17.984 "uuid": "d178610c-fdba-4c5c-9cef-53aeaf833c8f", 00:16:17.984 "assigned_rate_limits": { 00:16:17.984 "rw_ios_per_sec": 0, 00:16:17.984 "rw_mbytes_per_sec": 0, 00:16:17.984 "r_mbytes_per_sec": 0, 00:16:17.984 "w_mbytes_per_sec": 0 00:16:17.984 }, 00:16:17.984 "claimed": true, 00:16:17.984 "claim_type": "exclusive_write", 00:16:17.984 "zoned": false, 00:16:17.984 "supported_io_types": { 00:16:17.984 "read": true, 00:16:17.984 "write": true, 00:16:17.984 "unmap": true, 00:16:17.984 "flush": true, 00:16:17.984 "reset": true, 00:16:17.984 "nvme_admin": false, 00:16:17.984 "nvme_io": false, 00:16:17.984 "nvme_io_md": false, 00:16:17.984 "write_zeroes": true, 00:16:17.984 "zcopy": true, 00:16:17.984 "get_zone_info": false, 00:16:17.984 "zone_management": false, 00:16:17.984 "zone_append": false, 00:16:17.984 "compare": false, 00:16:17.984 "compare_and_write": false, 00:16:17.984 "abort": true, 00:16:17.984 "seek_hole": false, 00:16:17.984 "seek_data": false, 00:16:17.984 "copy": true, 00:16:17.984 "nvme_iov_md": false 00:16:17.984 }, 00:16:17.984 "memory_domains": [ 00:16:17.984 { 00:16:17.984 "dma_device_id": "system", 00:16:17.984 "dma_device_type": 1 00:16:17.984 }, 00:16:17.984 { 00:16:17.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.984 "dma_device_type": 2 00:16:17.984 } 00:16:17.984 ], 00:16:17.984 "driver_specific": {} 00:16:17.984 } 00:16:17.984 ] 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.984 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.985 
13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.985 "name": "Existed_Raid", 00:16:17.985 "uuid": "9fefbefa-507a-410b-83d4-5d5c4aff7be2", 00:16:17.985 "strip_size_kb": 0, 00:16:17.985 "state": "configuring", 00:16:17.985 "raid_level": "raid1", 00:16:17.985 "superblock": true, 00:16:17.985 "num_base_bdevs": 3, 00:16:17.985 "num_base_bdevs_discovered": 2, 00:16:17.985 "num_base_bdevs_operational": 3, 00:16:17.985 "base_bdevs_list": [ 00:16:17.985 { 00:16:17.985 "name": "BaseBdev1", 00:16:17.985 "uuid": "daaff0ca-40d1-4321-863d-a553283b7394", 00:16:17.985 "is_configured": true, 00:16:17.985 "data_offset": 2048, 00:16:17.985 "data_size": 63488 00:16:17.985 }, 00:16:17.985 { 00:16:17.985 "name": "BaseBdev2", 00:16:17.985 "uuid": "d178610c-fdba-4c5c-9cef-53aeaf833c8f", 00:16:17.985 "is_configured": true, 00:16:17.985 "data_offset": 2048, 00:16:17.985 "data_size": 63488 00:16:17.985 }, 00:16:17.985 { 00:16:17.985 "name": "BaseBdev3", 00:16:17.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.985 "is_configured": false, 00:16:17.985 "data_offset": 0, 00:16:17.985 "data_size": 0 00:16:17.985 } 00:16:17.985 ] 00:16:17.985 }' 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.985 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.552 [2024-12-06 13:10:24.954652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.552 [2024-12-06 13:10:24.955012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:16:18.552 [2024-12-06 13:10:24.955042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:18.552 BaseBdev3 00:16:18.552 [2024-12-06 13:10:24.955616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:18.552 [2024-12-06 13:10:24.955841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:18.552 [2024-12-06 13:10:24.955865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:18.552 [2024-12-06 13:10:24.956055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.552 13:10:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.552 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.552 [ 00:16:18.552 { 00:16:18.552 "name": "BaseBdev3", 00:16:18.552 "aliases": [ 00:16:18.552 "b9f60d2d-a20a-4474-b4f3-db7c56d46666" 00:16:18.552 ], 00:16:18.552 "product_name": "Malloc disk", 00:16:18.552 "block_size": 512, 00:16:18.552 "num_blocks": 65536, 00:16:18.552 "uuid": "b9f60d2d-a20a-4474-b4f3-db7c56d46666", 00:16:18.552 "assigned_rate_limits": { 00:16:18.552 "rw_ios_per_sec": 0, 00:16:18.552 "rw_mbytes_per_sec": 0, 00:16:18.552 "r_mbytes_per_sec": 0, 00:16:18.552 "w_mbytes_per_sec": 0 00:16:18.552 }, 00:16:18.552 "claimed": true, 00:16:18.552 "claim_type": "exclusive_write", 00:16:18.552 "zoned": false, 00:16:18.552 "supported_io_types": { 00:16:18.552 "read": true, 00:16:18.552 "write": true, 00:16:18.552 "unmap": true, 00:16:18.552 "flush": true, 00:16:18.552 "reset": true, 00:16:18.552 "nvme_admin": false, 00:16:18.552 "nvme_io": false, 00:16:18.552 "nvme_io_md": false, 00:16:18.552 "write_zeroes": true, 00:16:18.552 "zcopy": true, 00:16:18.552 "get_zone_info": false, 00:16:18.552 "zone_management": false, 00:16:18.552 "zone_append": false, 00:16:18.552 "compare": false, 00:16:18.552 "compare_and_write": false, 00:16:18.552 "abort": true, 00:16:18.552 "seek_hole": false, 00:16:18.553 "seek_data": false, 00:16:18.553 "copy": true, 00:16:18.553 "nvme_iov_md": false 00:16:18.553 }, 00:16:18.553 "memory_domains": [ 00:16:18.553 { 00:16:18.553 "dma_device_id": "system", 00:16:18.553 "dma_device_type": 1 00:16:18.553 }, 00:16:18.553 { 00:16:18.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.553 "dma_device_type": 2 00:16:18.553 } 00:16:18.553 ], 00:16:18.553 "driver_specific": {} 00:16:18.553 } 00:16:18.553 ] 
00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.553 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.553 13:10:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.553 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.553 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.553 "name": "Existed_Raid", 00:16:18.553 "uuid": "9fefbefa-507a-410b-83d4-5d5c4aff7be2", 00:16:18.553 "strip_size_kb": 0, 00:16:18.553 "state": "online", 00:16:18.553 "raid_level": "raid1", 00:16:18.553 "superblock": true, 00:16:18.553 "num_base_bdevs": 3, 00:16:18.553 "num_base_bdevs_discovered": 3, 00:16:18.553 "num_base_bdevs_operational": 3, 00:16:18.553 "base_bdevs_list": [ 00:16:18.553 { 00:16:18.553 "name": "BaseBdev1", 00:16:18.553 "uuid": "daaff0ca-40d1-4321-863d-a553283b7394", 00:16:18.553 "is_configured": true, 00:16:18.553 "data_offset": 2048, 00:16:18.553 "data_size": 63488 00:16:18.553 }, 00:16:18.553 { 00:16:18.553 "name": "BaseBdev2", 00:16:18.553 "uuid": "d178610c-fdba-4c5c-9cef-53aeaf833c8f", 00:16:18.553 "is_configured": true, 00:16:18.553 "data_offset": 2048, 00:16:18.553 "data_size": 63488 00:16:18.553 }, 00:16:18.553 { 00:16:18.553 "name": "BaseBdev3", 00:16:18.553 "uuid": "b9f60d2d-a20a-4474-b4f3-db7c56d46666", 00:16:18.553 "is_configured": true, 00:16:18.553 "data_offset": 2048, 00:16:18.553 "data_size": 63488 00:16:18.553 } 00:16:18.553 ] 00:16:18.553 }' 00:16:18.553 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.553 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.119 [2024-12-06 13:10:25.531269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.119 "name": "Existed_Raid", 00:16:19.119 "aliases": [ 00:16:19.119 "9fefbefa-507a-410b-83d4-5d5c4aff7be2" 00:16:19.119 ], 00:16:19.119 "product_name": "Raid Volume", 00:16:19.119 "block_size": 512, 00:16:19.119 "num_blocks": 63488, 00:16:19.119 "uuid": "9fefbefa-507a-410b-83d4-5d5c4aff7be2", 00:16:19.119 "assigned_rate_limits": { 00:16:19.119 "rw_ios_per_sec": 0, 00:16:19.119 "rw_mbytes_per_sec": 0, 00:16:19.119 "r_mbytes_per_sec": 0, 00:16:19.119 "w_mbytes_per_sec": 0 00:16:19.119 }, 00:16:19.119 "claimed": false, 00:16:19.119 "zoned": false, 00:16:19.119 "supported_io_types": { 00:16:19.119 "read": true, 00:16:19.119 "write": true, 00:16:19.119 "unmap": false, 00:16:19.119 "flush": false, 00:16:19.119 "reset": true, 00:16:19.119 "nvme_admin": false, 00:16:19.119 "nvme_io": false, 00:16:19.119 "nvme_io_md": false, 00:16:19.119 
"write_zeroes": true, 00:16:19.119 "zcopy": false, 00:16:19.119 "get_zone_info": false, 00:16:19.119 "zone_management": false, 00:16:19.119 "zone_append": false, 00:16:19.119 "compare": false, 00:16:19.119 "compare_and_write": false, 00:16:19.119 "abort": false, 00:16:19.119 "seek_hole": false, 00:16:19.119 "seek_data": false, 00:16:19.119 "copy": false, 00:16:19.119 "nvme_iov_md": false 00:16:19.119 }, 00:16:19.119 "memory_domains": [ 00:16:19.119 { 00:16:19.119 "dma_device_id": "system", 00:16:19.119 "dma_device_type": 1 00:16:19.119 }, 00:16:19.119 { 00:16:19.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.119 "dma_device_type": 2 00:16:19.119 }, 00:16:19.119 { 00:16:19.119 "dma_device_id": "system", 00:16:19.119 "dma_device_type": 1 00:16:19.119 }, 00:16:19.119 { 00:16:19.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.119 "dma_device_type": 2 00:16:19.119 }, 00:16:19.119 { 00:16:19.119 "dma_device_id": "system", 00:16:19.119 "dma_device_type": 1 00:16:19.119 }, 00:16:19.119 { 00:16:19.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.119 "dma_device_type": 2 00:16:19.119 } 00:16:19.119 ], 00:16:19.119 "driver_specific": { 00:16:19.119 "raid": { 00:16:19.119 "uuid": "9fefbefa-507a-410b-83d4-5d5c4aff7be2", 00:16:19.119 "strip_size_kb": 0, 00:16:19.119 "state": "online", 00:16:19.119 "raid_level": "raid1", 00:16:19.119 "superblock": true, 00:16:19.119 "num_base_bdevs": 3, 00:16:19.119 "num_base_bdevs_discovered": 3, 00:16:19.119 "num_base_bdevs_operational": 3, 00:16:19.119 "base_bdevs_list": [ 00:16:19.119 { 00:16:19.119 "name": "BaseBdev1", 00:16:19.119 "uuid": "daaff0ca-40d1-4321-863d-a553283b7394", 00:16:19.119 "is_configured": true, 00:16:19.119 "data_offset": 2048, 00:16:19.119 "data_size": 63488 00:16:19.119 }, 00:16:19.119 { 00:16:19.119 "name": "BaseBdev2", 00:16:19.119 "uuid": "d178610c-fdba-4c5c-9cef-53aeaf833c8f", 00:16:19.119 "is_configured": true, 00:16:19.119 "data_offset": 2048, 00:16:19.119 "data_size": 63488 00:16:19.119 }, 
00:16:19.119 { 00:16:19.119 "name": "BaseBdev3", 00:16:19.119 "uuid": "b9f60d2d-a20a-4474-b4f3-db7c56d46666", 00:16:19.119 "is_configured": true, 00:16:19.119 "data_offset": 2048, 00:16:19.119 "data_size": 63488 00:16:19.119 } 00:16:19.119 ] 00:16:19.119 } 00:16:19.119 } 00:16:19.119 }' 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:19.119 BaseBdev2 00:16:19.119 BaseBdev3' 00:16:19.119 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.377 
13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.377 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 [2024-12-06 13:10:25.855039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.635 
13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.635 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.635 "name": "Existed_Raid", 00:16:19.635 "uuid": "9fefbefa-507a-410b-83d4-5d5c4aff7be2", 00:16:19.635 "strip_size_kb": 0, 00:16:19.635 "state": "online", 00:16:19.635 "raid_level": "raid1", 00:16:19.635 "superblock": true, 00:16:19.635 "num_base_bdevs": 3, 00:16:19.635 "num_base_bdevs_discovered": 2, 00:16:19.635 "num_base_bdevs_operational": 2, 00:16:19.635 "base_bdevs_list": [ 00:16:19.635 { 00:16:19.635 "name": null, 00:16:19.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.635 "is_configured": false, 00:16:19.635 "data_offset": 0, 00:16:19.635 "data_size": 63488 00:16:19.635 }, 00:16:19.635 { 00:16:19.635 "name": "BaseBdev2", 00:16:19.635 "uuid": "d178610c-fdba-4c5c-9cef-53aeaf833c8f", 00:16:19.635 "is_configured": true, 00:16:19.635 "data_offset": 2048, 00:16:19.635 "data_size": 63488 00:16:19.635 }, 00:16:19.635 { 00:16:19.635 "name": "BaseBdev3", 00:16:19.635 "uuid": "b9f60d2d-a20a-4474-b4f3-db7c56d46666", 00:16:19.635 "is_configured": true, 00:16:19.635 "data_offset": 2048, 00:16:19.635 "data_size": 63488 00:16:19.635 } 00:16:19.635 ] 00:16:19.635 }' 00:16:19.635 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.635 
13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 [2024-12-06 13:10:26.582302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.203 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 [2024-12-06 13:10:26.729753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:20.462 [2024-12-06 13:10:26.729911] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.462 [2024-12-06 13:10:26.821579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.462 [2024-12-06 13:10:26.821672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.462 [2024-12-06 13:10:26.821694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 BaseBdev2 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 [ 00:16:20.462 { 00:16:20.462 "name": "BaseBdev2", 00:16:20.462 "aliases": [ 00:16:20.462 "06c311e6-ad21-4669-b20c-bda75e68b94d" 00:16:20.462 ], 00:16:20.462 "product_name": "Malloc disk", 00:16:20.462 "block_size": 512, 00:16:20.462 "num_blocks": 65536, 00:16:20.462 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:20.462 "assigned_rate_limits": { 00:16:20.462 "rw_ios_per_sec": 0, 00:16:20.462 "rw_mbytes_per_sec": 0, 00:16:20.462 "r_mbytes_per_sec": 0, 00:16:20.462 "w_mbytes_per_sec": 0 00:16:20.462 }, 00:16:20.462 "claimed": false, 00:16:20.462 "zoned": false, 00:16:20.462 "supported_io_types": { 00:16:20.462 "read": true, 00:16:20.462 "write": true, 00:16:20.462 "unmap": true, 00:16:20.462 "flush": true, 00:16:20.462 "reset": true, 00:16:20.462 "nvme_admin": false, 00:16:20.462 "nvme_io": false, 00:16:20.462 
"nvme_io_md": false, 00:16:20.462 "write_zeroes": true, 00:16:20.462 "zcopy": true, 00:16:20.462 "get_zone_info": false, 00:16:20.462 "zone_management": false, 00:16:20.462 "zone_append": false, 00:16:20.462 "compare": false, 00:16:20.462 "compare_and_write": false, 00:16:20.462 "abort": true, 00:16:20.462 "seek_hole": false, 00:16:20.462 "seek_data": false, 00:16:20.462 "copy": true, 00:16:20.462 "nvme_iov_md": false 00:16:20.462 }, 00:16:20.462 "memory_domains": [ 00:16:20.462 { 00:16:20.462 "dma_device_id": "system", 00:16:20.462 "dma_device_type": 1 00:16:20.462 }, 00:16:20.462 { 00:16:20.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.462 "dma_device_type": 2 00:16:20.462 } 00:16:20.462 ], 00:16:20.462 "driver_specific": {} 00:16:20.462 } 00:16:20.462 ] 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.462 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 BaseBdev3 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 [ 00:16:20.721 { 00:16:20.721 "name": "BaseBdev3", 00:16:20.721 "aliases": [ 00:16:20.721 "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1" 00:16:20.721 ], 00:16:20.721 "product_name": "Malloc disk", 00:16:20.721 "block_size": 512, 00:16:20.721 "num_blocks": 65536, 00:16:20.721 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:20.721 "assigned_rate_limits": { 00:16:20.721 "rw_ios_per_sec": 0, 00:16:20.721 "rw_mbytes_per_sec": 0, 00:16:20.721 "r_mbytes_per_sec": 0, 00:16:20.721 "w_mbytes_per_sec": 0 00:16:20.721 }, 00:16:20.721 "claimed": false, 00:16:20.721 "zoned": false, 00:16:20.721 "supported_io_types": { 00:16:20.721 "read": true, 00:16:20.721 "write": true, 00:16:20.721 "unmap": true, 00:16:20.721 "flush": true, 00:16:20.721 "reset": true, 00:16:20.721 "nvme_admin": false, 
00:16:20.721 "nvme_io": false, 00:16:20.721 "nvme_io_md": false, 00:16:20.721 "write_zeroes": true, 00:16:20.721 "zcopy": true, 00:16:20.721 "get_zone_info": false, 00:16:20.721 "zone_management": false, 00:16:20.721 "zone_append": false, 00:16:20.721 "compare": false, 00:16:20.721 "compare_and_write": false, 00:16:20.721 "abort": true, 00:16:20.721 "seek_hole": false, 00:16:20.721 "seek_data": false, 00:16:20.721 "copy": true, 00:16:20.721 "nvme_iov_md": false 00:16:20.721 }, 00:16:20.721 "memory_domains": [ 00:16:20.721 { 00:16:20.721 "dma_device_id": "system", 00:16:20.721 "dma_device_type": 1 00:16:20.721 }, 00:16:20.721 { 00:16:20.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.721 "dma_device_type": 2 00:16:20.721 } 00:16:20.721 ], 00:16:20.721 "driver_specific": {} 00:16:20.721 } 00:16:20.721 ] 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 [2024-12-06 13:10:27.040245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.721 [2024-12-06 13:10:27.040327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.721 [2024-12-06 13:10:27.040362] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.721 [2024-12-06 13:10:27.043008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 
13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.721 "name": "Existed_Raid", 00:16:20.721 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:20.721 "strip_size_kb": 0, 00:16:20.721 "state": "configuring", 00:16:20.721 "raid_level": "raid1", 00:16:20.721 "superblock": true, 00:16:20.721 "num_base_bdevs": 3, 00:16:20.721 "num_base_bdevs_discovered": 2, 00:16:20.721 "num_base_bdevs_operational": 3, 00:16:20.721 "base_bdevs_list": [ 00:16:20.721 { 00:16:20.721 "name": "BaseBdev1", 00:16:20.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.721 "is_configured": false, 00:16:20.721 "data_offset": 0, 00:16:20.721 "data_size": 0 00:16:20.721 }, 00:16:20.721 { 00:16:20.722 "name": "BaseBdev2", 00:16:20.722 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:20.722 "is_configured": true, 00:16:20.722 "data_offset": 2048, 00:16:20.722 "data_size": 63488 00:16:20.722 }, 00:16:20.722 { 00:16:20.722 "name": "BaseBdev3", 00:16:20.722 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:20.722 "is_configured": true, 00:16:20.722 "data_offset": 2048, 00:16:20.722 "data_size": 63488 00:16:20.722 } 00:16:20.722 ] 00:16:20.722 }' 00:16:20.722 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.722 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.290 [2024-12-06 13:10:27.540401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.290 13:10:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.290 "name": 
"Existed_Raid", 00:16:21.290 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:21.290 "strip_size_kb": 0, 00:16:21.290 "state": "configuring", 00:16:21.290 "raid_level": "raid1", 00:16:21.290 "superblock": true, 00:16:21.290 "num_base_bdevs": 3, 00:16:21.290 "num_base_bdevs_discovered": 1, 00:16:21.290 "num_base_bdevs_operational": 3, 00:16:21.290 "base_bdevs_list": [ 00:16:21.290 { 00:16:21.290 "name": "BaseBdev1", 00:16:21.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.290 "is_configured": false, 00:16:21.290 "data_offset": 0, 00:16:21.290 "data_size": 0 00:16:21.290 }, 00:16:21.290 { 00:16:21.290 "name": null, 00:16:21.290 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:21.290 "is_configured": false, 00:16:21.290 "data_offset": 0, 00:16:21.290 "data_size": 63488 00:16:21.290 }, 00:16:21.290 { 00:16:21.290 "name": "BaseBdev3", 00:16:21.290 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:21.290 "is_configured": true, 00:16:21.290 "data_offset": 2048, 00:16:21.290 "data_size": 63488 00:16:21.290 } 00:16:21.290 ] 00:16:21.290 }' 00:16:21.290 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.291 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.550 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.550 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.550 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.550 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:21.550 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:21.808 
13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.808 [2024-12-06 13:10:28.138090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.808 BaseBdev1 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.808 [ 00:16:21.808 { 00:16:21.808 "name": "BaseBdev1", 00:16:21.808 "aliases": [ 00:16:21.808 "367935f1-4a64-4e44-87e3-235f9bf081fb" 00:16:21.808 ], 00:16:21.808 "product_name": "Malloc disk", 00:16:21.808 "block_size": 512, 00:16:21.808 "num_blocks": 65536, 00:16:21.808 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:21.808 "assigned_rate_limits": { 00:16:21.808 "rw_ios_per_sec": 0, 00:16:21.808 "rw_mbytes_per_sec": 0, 00:16:21.808 "r_mbytes_per_sec": 0, 00:16:21.808 "w_mbytes_per_sec": 0 00:16:21.808 }, 00:16:21.808 "claimed": true, 00:16:21.808 "claim_type": "exclusive_write", 00:16:21.808 "zoned": false, 00:16:21.808 "supported_io_types": { 00:16:21.808 "read": true, 00:16:21.808 "write": true, 00:16:21.808 "unmap": true, 00:16:21.808 "flush": true, 00:16:21.808 "reset": true, 00:16:21.808 "nvme_admin": false, 00:16:21.808 "nvme_io": false, 00:16:21.808 "nvme_io_md": false, 00:16:21.808 "write_zeroes": true, 00:16:21.808 "zcopy": true, 00:16:21.808 "get_zone_info": false, 00:16:21.808 "zone_management": false, 00:16:21.808 "zone_append": false, 00:16:21.808 "compare": false, 00:16:21.808 "compare_and_write": false, 00:16:21.808 "abort": true, 00:16:21.808 "seek_hole": false, 00:16:21.808 "seek_data": false, 00:16:21.808 "copy": true, 00:16:21.808 "nvme_iov_md": false 00:16:21.808 }, 00:16:21.808 "memory_domains": [ 00:16:21.808 { 00:16:21.808 "dma_device_id": "system", 00:16:21.808 "dma_device_type": 1 00:16:21.808 }, 00:16:21.808 { 00:16:21.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.808 "dma_device_type": 2 00:16:21.808 } 00:16:21.808 ], 00:16:21.808 "driver_specific": {} 00:16:21.808 } 00:16:21.808 ] 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:21.808 
13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.808 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.808 "name": "Existed_Raid", 00:16:21.808 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:21.808 "strip_size_kb": 0, 
00:16:21.808 "state": "configuring", 00:16:21.808 "raid_level": "raid1", 00:16:21.808 "superblock": true, 00:16:21.808 "num_base_bdevs": 3, 00:16:21.809 "num_base_bdevs_discovered": 2, 00:16:21.809 "num_base_bdevs_operational": 3, 00:16:21.809 "base_bdevs_list": [ 00:16:21.809 { 00:16:21.809 "name": "BaseBdev1", 00:16:21.809 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:21.809 "is_configured": true, 00:16:21.809 "data_offset": 2048, 00:16:21.809 "data_size": 63488 00:16:21.809 }, 00:16:21.809 { 00:16:21.809 "name": null, 00:16:21.809 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:21.809 "is_configured": false, 00:16:21.809 "data_offset": 0, 00:16:21.809 "data_size": 63488 00:16:21.809 }, 00:16:21.809 { 00:16:21.809 "name": "BaseBdev3", 00:16:21.809 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:21.809 "is_configured": true, 00:16:21.809 "data_offset": 2048, 00:16:21.809 "data_size": 63488 00:16:21.809 } 00:16:21.809 ] 00:16:21.809 }' 00:16:21.809 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.809 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.377 [2024-12-06 13:10:28.738353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.377 13:10:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.377 "name": "Existed_Raid", 00:16:22.377 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:22.377 "strip_size_kb": 0, 00:16:22.377 "state": "configuring", 00:16:22.377 "raid_level": "raid1", 00:16:22.377 "superblock": true, 00:16:22.377 "num_base_bdevs": 3, 00:16:22.377 "num_base_bdevs_discovered": 1, 00:16:22.377 "num_base_bdevs_operational": 3, 00:16:22.377 "base_bdevs_list": [ 00:16:22.377 { 00:16:22.377 "name": "BaseBdev1", 00:16:22.377 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:22.377 "is_configured": true, 00:16:22.377 "data_offset": 2048, 00:16:22.377 "data_size": 63488 00:16:22.377 }, 00:16:22.377 { 00:16:22.377 "name": null, 00:16:22.377 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:22.377 "is_configured": false, 00:16:22.377 "data_offset": 0, 00:16:22.377 "data_size": 63488 00:16:22.377 }, 00:16:22.377 { 00:16:22.377 "name": null, 00:16:22.377 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:22.377 "is_configured": false, 00:16:22.377 "data_offset": 0, 00:16:22.377 "data_size": 63488 00:16:22.377 } 00:16:22.377 ] 00:16:22.377 }' 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.377 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.943 13:10:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.943 [2024-12-06 13:10:29.322600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.943 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.944 "name": "Existed_Raid", 00:16:22.944 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:22.944 "strip_size_kb": 0, 00:16:22.944 "state": "configuring", 00:16:22.944 "raid_level": "raid1", 00:16:22.944 "superblock": true, 00:16:22.944 "num_base_bdevs": 3, 00:16:22.944 "num_base_bdevs_discovered": 2, 00:16:22.944 "num_base_bdevs_operational": 3, 00:16:22.944 "base_bdevs_list": [ 00:16:22.944 { 00:16:22.944 "name": "BaseBdev1", 00:16:22.944 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:22.944 "is_configured": true, 00:16:22.944 "data_offset": 2048, 00:16:22.944 "data_size": 63488 00:16:22.944 }, 00:16:22.944 { 00:16:22.944 "name": null, 00:16:22.944 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:22.944 "is_configured": false, 00:16:22.944 "data_offset": 0, 00:16:22.944 "data_size": 63488 00:16:22.944 }, 00:16:22.944 { 00:16:22.944 "name": "BaseBdev3", 00:16:22.944 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:22.944 "is_configured": true, 00:16:22.944 "data_offset": 2048, 00:16:22.944 "data_size": 63488 00:16:22.944 } 00:16:22.944 ] 00:16:22.944 }' 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.944 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.511 [2024-12-06 13:10:29.902738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.511 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.511 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.511 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.511 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.511 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.770 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.770 "name": "Existed_Raid", 00:16:23.770 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:23.770 "strip_size_kb": 0, 00:16:23.770 "state": "configuring", 00:16:23.770 "raid_level": "raid1", 00:16:23.770 "superblock": true, 00:16:23.770 "num_base_bdevs": 3, 00:16:23.770 "num_base_bdevs_discovered": 1, 00:16:23.770 "num_base_bdevs_operational": 3, 00:16:23.770 "base_bdevs_list": [ 00:16:23.770 { 00:16:23.770 "name": null, 00:16:23.770 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:23.770 "is_configured": false, 00:16:23.770 "data_offset": 0, 00:16:23.770 "data_size": 63488 00:16:23.770 }, 00:16:23.770 { 00:16:23.770 "name": null, 00:16:23.770 "uuid": 
"06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:23.770 "is_configured": false, 00:16:23.770 "data_offset": 0, 00:16:23.770 "data_size": 63488 00:16:23.770 }, 00:16:23.770 { 00:16:23.770 "name": "BaseBdev3", 00:16:23.770 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:23.770 "is_configured": true, 00:16:23.770 "data_offset": 2048, 00:16:23.770 "data_size": 63488 00:16:23.770 } 00:16:23.770 ] 00:16:23.770 }' 00:16:23.770 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.770 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.029 [2024-12-06 13:10:30.546996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.029 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.289 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.289 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.289 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.289 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.289 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.289 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.289 "name": "Existed_Raid", 00:16:24.289 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:24.289 "strip_size_kb": 0, 00:16:24.289 "state": "configuring", 00:16:24.289 
"raid_level": "raid1", 00:16:24.289 "superblock": true, 00:16:24.289 "num_base_bdevs": 3, 00:16:24.289 "num_base_bdevs_discovered": 2, 00:16:24.289 "num_base_bdevs_operational": 3, 00:16:24.289 "base_bdevs_list": [ 00:16:24.289 { 00:16:24.289 "name": null, 00:16:24.289 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:24.289 "is_configured": false, 00:16:24.289 "data_offset": 0, 00:16:24.289 "data_size": 63488 00:16:24.289 }, 00:16:24.289 { 00:16:24.289 "name": "BaseBdev2", 00:16:24.289 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:24.289 "is_configured": true, 00:16:24.289 "data_offset": 2048, 00:16:24.289 "data_size": 63488 00:16:24.289 }, 00:16:24.289 { 00:16:24.289 "name": "BaseBdev3", 00:16:24.289 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:24.289 "is_configured": true, 00:16:24.289 "data_offset": 2048, 00:16:24.289 "data_size": 63488 00:16:24.289 } 00:16:24.289 ] 00:16:24.289 }' 00:16:24.289 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.289 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.550 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.550 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.550 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:24.550 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.825 13:10:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 367935f1-4a64-4e44-87e3-235f9bf081fb 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.825 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.825 [2024-12-06 13:10:31.220923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:24.825 [2024-12-06 13:10:31.221279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:24.825 [2024-12-06 13:10:31.221311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.826 [2024-12-06 13:10:31.221680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:24.826 NewBaseBdev 00:16:24.826 [2024-12-06 13:10:31.221888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:24.826 [2024-12-06 13:10:31.221910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:24.826 [2024-12-06 13:10:31.222081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:24.826 
13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.826 [ 00:16:24.826 { 00:16:24.826 "name": "NewBaseBdev", 00:16:24.826 "aliases": [ 00:16:24.826 "367935f1-4a64-4e44-87e3-235f9bf081fb" 00:16:24.826 ], 00:16:24.826 "product_name": "Malloc disk", 00:16:24.826 "block_size": 512, 00:16:24.826 "num_blocks": 65536, 00:16:24.826 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:24.826 "assigned_rate_limits": { 00:16:24.826 "rw_ios_per_sec": 0, 00:16:24.826 "rw_mbytes_per_sec": 0, 00:16:24.826 "r_mbytes_per_sec": 0, 00:16:24.826 "w_mbytes_per_sec": 0 00:16:24.826 }, 00:16:24.826 "claimed": true, 00:16:24.826 "claim_type": "exclusive_write", 00:16:24.826 
"zoned": false, 00:16:24.826 "supported_io_types": { 00:16:24.826 "read": true, 00:16:24.826 "write": true, 00:16:24.826 "unmap": true, 00:16:24.826 "flush": true, 00:16:24.826 "reset": true, 00:16:24.826 "nvme_admin": false, 00:16:24.826 "nvme_io": false, 00:16:24.826 "nvme_io_md": false, 00:16:24.826 "write_zeroes": true, 00:16:24.826 "zcopy": true, 00:16:24.826 "get_zone_info": false, 00:16:24.826 "zone_management": false, 00:16:24.826 "zone_append": false, 00:16:24.826 "compare": false, 00:16:24.826 "compare_and_write": false, 00:16:24.826 "abort": true, 00:16:24.826 "seek_hole": false, 00:16:24.826 "seek_data": false, 00:16:24.826 "copy": true, 00:16:24.826 "nvme_iov_md": false 00:16:24.826 }, 00:16:24.826 "memory_domains": [ 00:16:24.826 { 00:16:24.826 "dma_device_id": "system", 00:16:24.826 "dma_device_type": 1 00:16:24.826 }, 00:16:24.826 { 00:16:24.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.826 "dma_device_type": 2 00:16:24.826 } 00:16:24.826 ], 00:16:24.826 "driver_specific": {} 00:16:24.826 } 00:16:24.826 ] 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.826 "name": "Existed_Raid", 00:16:24.826 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:24.826 "strip_size_kb": 0, 00:16:24.826 "state": "online", 00:16:24.826 "raid_level": "raid1", 00:16:24.826 "superblock": true, 00:16:24.826 "num_base_bdevs": 3, 00:16:24.826 "num_base_bdevs_discovered": 3, 00:16:24.826 "num_base_bdevs_operational": 3, 00:16:24.826 "base_bdevs_list": [ 00:16:24.826 { 00:16:24.826 "name": "NewBaseBdev", 00:16:24.826 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:24.826 "is_configured": true, 00:16:24.826 "data_offset": 2048, 00:16:24.826 "data_size": 63488 00:16:24.826 }, 00:16:24.826 { 00:16:24.826 "name": "BaseBdev2", 00:16:24.826 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:24.826 "is_configured": true, 00:16:24.826 "data_offset": 2048, 00:16:24.826 "data_size": 63488 00:16:24.826 }, 00:16:24.826 
{ 00:16:24.826 "name": "BaseBdev3", 00:16:24.826 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:24.826 "is_configured": true, 00:16:24.826 "data_offset": 2048, 00:16:24.826 "data_size": 63488 00:16:24.826 } 00:16:24.826 ] 00:16:24.826 }' 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.826 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.394 [2024-12-06 13:10:31.753573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.394 "name": "Existed_Raid", 00:16:25.394 
"aliases": [ 00:16:25.394 "ef8de268-5039-4807-8d25-3e5489c7c52d" 00:16:25.394 ], 00:16:25.394 "product_name": "Raid Volume", 00:16:25.394 "block_size": 512, 00:16:25.394 "num_blocks": 63488, 00:16:25.394 "uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:25.394 "assigned_rate_limits": { 00:16:25.394 "rw_ios_per_sec": 0, 00:16:25.394 "rw_mbytes_per_sec": 0, 00:16:25.394 "r_mbytes_per_sec": 0, 00:16:25.394 "w_mbytes_per_sec": 0 00:16:25.394 }, 00:16:25.394 "claimed": false, 00:16:25.394 "zoned": false, 00:16:25.394 "supported_io_types": { 00:16:25.394 "read": true, 00:16:25.394 "write": true, 00:16:25.394 "unmap": false, 00:16:25.394 "flush": false, 00:16:25.394 "reset": true, 00:16:25.394 "nvme_admin": false, 00:16:25.394 "nvme_io": false, 00:16:25.394 "nvme_io_md": false, 00:16:25.394 "write_zeroes": true, 00:16:25.394 "zcopy": false, 00:16:25.394 "get_zone_info": false, 00:16:25.394 "zone_management": false, 00:16:25.394 "zone_append": false, 00:16:25.394 "compare": false, 00:16:25.394 "compare_and_write": false, 00:16:25.394 "abort": false, 00:16:25.394 "seek_hole": false, 00:16:25.394 "seek_data": false, 00:16:25.394 "copy": false, 00:16:25.394 "nvme_iov_md": false 00:16:25.394 }, 00:16:25.394 "memory_domains": [ 00:16:25.394 { 00:16:25.394 "dma_device_id": "system", 00:16:25.394 "dma_device_type": 1 00:16:25.394 }, 00:16:25.394 { 00:16:25.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.394 "dma_device_type": 2 00:16:25.394 }, 00:16:25.394 { 00:16:25.394 "dma_device_id": "system", 00:16:25.394 "dma_device_type": 1 00:16:25.394 }, 00:16:25.394 { 00:16:25.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.394 "dma_device_type": 2 00:16:25.394 }, 00:16:25.394 { 00:16:25.394 "dma_device_id": "system", 00:16:25.394 "dma_device_type": 1 00:16:25.394 }, 00:16:25.394 { 00:16:25.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.394 "dma_device_type": 2 00:16:25.394 } 00:16:25.394 ], 00:16:25.394 "driver_specific": { 00:16:25.394 "raid": { 00:16:25.394 
"uuid": "ef8de268-5039-4807-8d25-3e5489c7c52d", 00:16:25.394 "strip_size_kb": 0, 00:16:25.394 "state": "online", 00:16:25.394 "raid_level": "raid1", 00:16:25.394 "superblock": true, 00:16:25.394 "num_base_bdevs": 3, 00:16:25.394 "num_base_bdevs_discovered": 3, 00:16:25.394 "num_base_bdevs_operational": 3, 00:16:25.394 "base_bdevs_list": [ 00:16:25.394 { 00:16:25.394 "name": "NewBaseBdev", 00:16:25.394 "uuid": "367935f1-4a64-4e44-87e3-235f9bf081fb", 00:16:25.394 "is_configured": true, 00:16:25.394 "data_offset": 2048, 00:16:25.394 "data_size": 63488 00:16:25.394 }, 00:16:25.394 { 00:16:25.394 "name": "BaseBdev2", 00:16:25.394 "uuid": "06c311e6-ad21-4669-b20c-bda75e68b94d", 00:16:25.394 "is_configured": true, 00:16:25.394 "data_offset": 2048, 00:16:25.394 "data_size": 63488 00:16:25.394 }, 00:16:25.394 { 00:16:25.394 "name": "BaseBdev3", 00:16:25.394 "uuid": "42f14ae7-dbe0-49f9-9eb3-f496ae0c8bf1", 00:16:25.394 "is_configured": true, 00:16:25.394 "data_offset": 2048, 00:16:25.394 "data_size": 63488 00:16:25.394 } 00:16:25.394 ] 00:16:25.394 } 00:16:25.394 } 00:16:25.394 }' 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:25.394 BaseBdev2 00:16:25.394 BaseBdev3' 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:25.394 13:10:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.394 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.654 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.654 [2024-12-06 13:10:32.053222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.654 [2024-12-06 13:10:32.053287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.654 [2024-12-06 13:10:32.053388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.654 [2024-12-06 13:10:32.053855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.654 [2024-12-06 13:10:32.053883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68344 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68344 ']' 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68344 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68344 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68344' 00:16:25.654 killing process with pid 68344 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68344 00:16:25.654 [2024-12-06 13:10:32.087838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.654 13:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68344 00:16:25.913 [2024-12-06 13:10:32.349979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.289 13:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:27.289 00:16:27.289 real 0m11.978s 00:16:27.289 user 0m19.713s 00:16:27.289 sys 0m1.743s 00:16:27.289 13:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.289 13:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 ************************************ 00:16:27.289 END TEST raid_state_function_test_sb 00:16:27.289 ************************************ 00:16:27.289 13:10:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:16:27.289 13:10:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:27.289 13:10:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.289 13:10:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 ************************************ 00:16:27.289 START TEST raid_superblock_test 00:16:27.289 ************************************ 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68976 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68976 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68976 ']' 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.289 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.289 [2024-12-06 13:10:33.634669] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:16:27.289 [2024-12-06 13:10:33.634870] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68976 ] 00:16:27.289 [2024-12-06 13:10:33.812097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.548 [2024-12-06 13:10:33.961659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.807 [2024-12-06 13:10:34.189997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.807 [2024-12-06 13:10:34.190063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:28.391 
13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.391 malloc1 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.391 [2024-12-06 13:10:34.689300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:28.391 [2024-12-06 13:10:34.689409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.391 [2024-12-06 13:10:34.689444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:28.391 [2024-12-06 13:10:34.689473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.391 [2024-12-06 13:10:34.692432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.391 [2024-12-06 13:10:34.692528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:28.391 pt1 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.391 malloc2 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.391 [2024-12-06 13:10:34.750381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.391 [2024-12-06 13:10:34.750482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.391 [2024-12-06 13:10:34.750520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:28.391 [2024-12-06 13:10:34.750536] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.391 [2024-12-06 13:10:34.753428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.391 [2024-12-06 13:10:34.753485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.391 
pt2 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.391 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.392 malloc3 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.392 [2024-12-06 13:10:34.819093] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:28.392 [2024-12-06 13:10:34.819224] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.392 [2024-12-06 13:10:34.819259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:28.392 [2024-12-06 13:10:34.819276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.392 [2024-12-06 13:10:34.822356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.392 [2024-12-06 13:10:34.822405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.392 pt3 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.392 [2024-12-06 13:10:34.827269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:28.392 [2024-12-06 13:10:34.829913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.392 [2024-12-06 13:10:34.830025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.392 [2024-12-06 13:10:34.830299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:28.392 [2024-12-06 13:10:34.830339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:28.392 [2024-12-06 13:10:34.830665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:28.392 
[2024-12-06 13:10:34.830949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:28.392 [2024-12-06 13:10:34.830979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:28.392 [2024-12-06 13:10:34.831236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.392 "name": "raid_bdev1", 00:16:28.392 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:28.392 "strip_size_kb": 0, 00:16:28.392 "state": "online", 00:16:28.392 "raid_level": "raid1", 00:16:28.392 "superblock": true, 00:16:28.392 "num_base_bdevs": 3, 00:16:28.392 "num_base_bdevs_discovered": 3, 00:16:28.392 "num_base_bdevs_operational": 3, 00:16:28.392 "base_bdevs_list": [ 00:16:28.392 { 00:16:28.392 "name": "pt1", 00:16:28.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.392 "is_configured": true, 00:16:28.392 "data_offset": 2048, 00:16:28.392 "data_size": 63488 00:16:28.392 }, 00:16:28.392 { 00:16:28.392 "name": "pt2", 00:16:28.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.392 "is_configured": true, 00:16:28.392 "data_offset": 2048, 00:16:28.392 "data_size": 63488 00:16:28.392 }, 00:16:28.392 { 00:16:28.392 "name": "pt3", 00:16:28.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.392 "is_configured": true, 00:16:28.392 "data_offset": 2048, 00:16:28.392 "data_size": 63488 00:16:28.392 } 00:16:28.392 ] 00:16:28.392 }' 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.392 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.016 13:10:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.016 [2024-12-06 13:10:35.399956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:29.016 "name": "raid_bdev1", 00:16:29.016 "aliases": [ 00:16:29.016 "ca9b89b9-fc3e-4821-a4aa-a058a2e04053" 00:16:29.016 ], 00:16:29.016 "product_name": "Raid Volume", 00:16:29.016 "block_size": 512, 00:16:29.016 "num_blocks": 63488, 00:16:29.016 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:29.016 "assigned_rate_limits": { 00:16:29.016 "rw_ios_per_sec": 0, 00:16:29.016 "rw_mbytes_per_sec": 0, 00:16:29.016 "r_mbytes_per_sec": 0, 00:16:29.016 "w_mbytes_per_sec": 0 00:16:29.016 }, 00:16:29.016 "claimed": false, 00:16:29.016 "zoned": false, 00:16:29.016 "supported_io_types": { 00:16:29.016 "read": true, 00:16:29.016 "write": true, 00:16:29.016 "unmap": false, 00:16:29.016 "flush": false, 00:16:29.016 "reset": true, 00:16:29.016 "nvme_admin": false, 00:16:29.016 "nvme_io": false, 00:16:29.016 "nvme_io_md": false, 00:16:29.016 "write_zeroes": true, 00:16:29.016 "zcopy": false, 00:16:29.016 "get_zone_info": false, 00:16:29.016 "zone_management": false, 00:16:29.016 "zone_append": false, 00:16:29.016 "compare": false, 00:16:29.016 
"compare_and_write": false, 00:16:29.016 "abort": false, 00:16:29.016 "seek_hole": false, 00:16:29.016 "seek_data": false, 00:16:29.016 "copy": false, 00:16:29.016 "nvme_iov_md": false 00:16:29.016 }, 00:16:29.016 "memory_domains": [ 00:16:29.016 { 00:16:29.016 "dma_device_id": "system", 00:16:29.016 "dma_device_type": 1 00:16:29.016 }, 00:16:29.016 { 00:16:29.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.016 "dma_device_type": 2 00:16:29.016 }, 00:16:29.016 { 00:16:29.016 "dma_device_id": "system", 00:16:29.016 "dma_device_type": 1 00:16:29.016 }, 00:16:29.016 { 00:16:29.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.016 "dma_device_type": 2 00:16:29.016 }, 00:16:29.016 { 00:16:29.016 "dma_device_id": "system", 00:16:29.016 "dma_device_type": 1 00:16:29.016 }, 00:16:29.016 { 00:16:29.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.016 "dma_device_type": 2 00:16:29.016 } 00:16:29.016 ], 00:16:29.016 "driver_specific": { 00:16:29.016 "raid": { 00:16:29.016 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:29.016 "strip_size_kb": 0, 00:16:29.016 "state": "online", 00:16:29.016 "raid_level": "raid1", 00:16:29.016 "superblock": true, 00:16:29.016 "num_base_bdevs": 3, 00:16:29.016 "num_base_bdevs_discovered": 3, 00:16:29.016 "num_base_bdevs_operational": 3, 00:16:29.016 "base_bdevs_list": [ 00:16:29.016 { 00:16:29.016 "name": "pt1", 00:16:29.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.016 "is_configured": true, 00:16:29.016 "data_offset": 2048, 00:16:29.016 "data_size": 63488 00:16:29.016 }, 00:16:29.016 { 00:16:29.016 "name": "pt2", 00:16:29.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.016 "is_configured": true, 00:16:29.016 "data_offset": 2048, 00:16:29.016 "data_size": 63488 00:16:29.016 }, 00:16:29.016 { 00:16:29.016 "name": "pt3", 00:16:29.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.016 "is_configured": true, 00:16:29.016 "data_offset": 2048, 00:16:29.016 "data_size": 63488 00:16:29.016 } 
00:16:29.016 ] 00:16:29.016 } 00:16:29.016 } 00:16:29.016 }' 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:29.016 pt2 00:16:29.016 pt3' 00:16:29.016 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.314 [2024-12-06 13:10:35.711946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ca9b89b9-fc3e-4821-a4aa-a058a2e04053 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ca9b89b9-fc3e-4821-a4aa-a058a2e04053 ']' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.314 [2024-12-06 13:10:35.763603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.314 [2024-12-06 13:10:35.763647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.314 [2024-12-06 13:10:35.763762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.314 [2024-12-06 13:10:35.763873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.314 [2024-12-06 13:10:35.763891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.314 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:29.573 13:10:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.573 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.574 [2024-12-06 13:10:35.911718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:29.574 [2024-12-06 13:10:35.914409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:29.574 [2024-12-06 13:10:35.914517] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:29.574 [2024-12-06 13:10:35.914599] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:29.574 [2024-12-06 13:10:35.914681] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:29.574 [2024-12-06 13:10:35.914717] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:29.574 [2024-12-06 13:10:35.914746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.574 [2024-12-06 13:10:35.914760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:29.574 request: 00:16:29.574 { 00:16:29.574 "name": "raid_bdev1", 00:16:29.574 "raid_level": "raid1", 00:16:29.574 "base_bdevs": [ 00:16:29.574 "malloc1", 00:16:29.574 "malloc2", 00:16:29.574 "malloc3" 00:16:29.574 ], 00:16:29.574 "superblock": false, 00:16:29.574 "method": "bdev_raid_create", 00:16:29.574 "req_id": 1 00:16:29.574 } 00:16:29.574 Got JSON-RPC error response 00:16:29.574 response: 00:16:29.574 { 00:16:29.574 "code": -17, 00:16:29.574 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:29.574 } 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.574 [2024-12-06 13:10:35.971674] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.574 [2024-12-06 13:10:35.971925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.574 [2024-12-06 13:10:35.972157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:29.574 [2024-12-06 13:10:35.972278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.574 [2024-12-06 13:10:35.975821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.574 [2024-12-06 13:10:35.975992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.574 [2024-12-06 13:10:35.976525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:29.574 [2024-12-06 13:10:35.976739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.574 pt1 00:16:29.574 
13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.574 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.574 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.574 "name": "raid_bdev1", 00:16:29.574 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:29.574 "strip_size_kb": 0, 00:16:29.574 
"state": "configuring", 00:16:29.574 "raid_level": "raid1", 00:16:29.574 "superblock": true, 00:16:29.574 "num_base_bdevs": 3, 00:16:29.574 "num_base_bdevs_discovered": 1, 00:16:29.574 "num_base_bdevs_operational": 3, 00:16:29.574 "base_bdevs_list": [ 00:16:29.574 { 00:16:29.574 "name": "pt1", 00:16:29.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.574 "is_configured": true, 00:16:29.574 "data_offset": 2048, 00:16:29.574 "data_size": 63488 00:16:29.574 }, 00:16:29.574 { 00:16:29.574 "name": null, 00:16:29.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.574 "is_configured": false, 00:16:29.574 "data_offset": 2048, 00:16:29.574 "data_size": 63488 00:16:29.574 }, 00:16:29.574 { 00:16:29.574 "name": null, 00:16:29.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.574 "is_configured": false, 00:16:29.574 "data_offset": 2048, 00:16:29.574 "data_size": 63488 00:16:29.574 } 00:16:29.574 ] 00:16:29.574 }' 00:16:29.574 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.574 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.140 [2024-12-06 13:10:36.528866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.140 [2024-12-06 13:10:36.528992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.140 [2024-12-06 13:10:36.529029] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:30.140 
[2024-12-06 13:10:36.529046] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.140 [2024-12-06 13:10:36.529724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.140 [2024-12-06 13:10:36.529759] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.140 [2024-12-06 13:10:36.529911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:30.140 [2024-12-06 13:10:36.529946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.140 pt2 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.140 [2024-12-06 13:10:36.536846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.140 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.141 "name": "raid_bdev1", 00:16:30.141 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:30.141 "strip_size_kb": 0, 00:16:30.141 "state": "configuring", 00:16:30.141 "raid_level": "raid1", 00:16:30.141 "superblock": true, 00:16:30.141 "num_base_bdevs": 3, 00:16:30.141 "num_base_bdevs_discovered": 1, 00:16:30.141 "num_base_bdevs_operational": 3, 00:16:30.141 "base_bdevs_list": [ 00:16:30.141 { 00:16:30.141 "name": "pt1", 00:16:30.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.141 "is_configured": true, 00:16:30.141 "data_offset": 2048, 00:16:30.141 "data_size": 63488 00:16:30.141 }, 00:16:30.141 { 00:16:30.141 "name": null, 00:16:30.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.141 "is_configured": false, 00:16:30.141 "data_offset": 0, 00:16:30.141 "data_size": 63488 00:16:30.141 }, 00:16:30.141 { 00:16:30.141 "name": null, 00:16:30.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.141 "is_configured": false, 00:16:30.141 
"data_offset": 2048, 00:16:30.141 "data_size": 63488 00:16:30.141 } 00:16:30.141 ] 00:16:30.141 }' 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.141 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.708 [2024-12-06 13:10:37.109065] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.708 [2024-12-06 13:10:37.109247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.708 [2024-12-06 13:10:37.109294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:30.708 [2024-12-06 13:10:37.109326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.708 [2024-12-06 13:10:37.110338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.708 [2024-12-06 13:10:37.110389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.708 [2024-12-06 13:10:37.110583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:30.708 [2024-12-06 13:10:37.110671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.708 pt2 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.708 13:10:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.708 [2024-12-06 13:10:37.117013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:30.708 [2024-12-06 13:10:37.117097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.708 [2024-12-06 13:10:37.117126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:30.708 [2024-12-06 13:10:37.117150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.708 [2024-12-06 13:10:37.117761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.708 [2024-12-06 13:10:37.117847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:30.708 [2024-12-06 13:10:37.117954] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:30.708 [2024-12-06 13:10:37.118008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:30.708 [2024-12-06 13:10:37.118254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:30.708 [2024-12-06 13:10:37.118308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.708 [2024-12-06 13:10:37.118747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:30.708 [2024-12-06 13:10:37.119052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:16:30.708 [2024-12-06 13:10:37.119073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:30.708 [2024-12-06 13:10:37.119317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.708 pt3 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.708 13:10:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.708 "name": "raid_bdev1", 00:16:30.708 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:30.708 "strip_size_kb": 0, 00:16:30.708 "state": "online", 00:16:30.708 "raid_level": "raid1", 00:16:30.708 "superblock": true, 00:16:30.708 "num_base_bdevs": 3, 00:16:30.708 "num_base_bdevs_discovered": 3, 00:16:30.708 "num_base_bdevs_operational": 3, 00:16:30.708 "base_bdevs_list": [ 00:16:30.708 { 00:16:30.708 "name": "pt1", 00:16:30.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.708 "is_configured": true, 00:16:30.708 "data_offset": 2048, 00:16:30.708 "data_size": 63488 00:16:30.708 }, 00:16:30.708 { 00:16:30.708 "name": "pt2", 00:16:30.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.708 "is_configured": true, 00:16:30.708 "data_offset": 2048, 00:16:30.708 "data_size": 63488 00:16:30.708 }, 00:16:30.708 { 00:16:30.708 "name": "pt3", 00:16:30.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.708 "is_configured": true, 00:16:30.708 "data_offset": 2048, 00:16:30.708 "data_size": 63488 00:16:30.708 } 00:16:30.708 ] 00:16:30.708 }' 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.708 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.274 [2024-12-06 13:10:37.693648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.274 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.274 "name": "raid_bdev1", 00:16:31.274 "aliases": [ 00:16:31.274 "ca9b89b9-fc3e-4821-a4aa-a058a2e04053" 00:16:31.274 ], 00:16:31.274 "product_name": "Raid Volume", 00:16:31.274 "block_size": 512, 00:16:31.274 "num_blocks": 63488, 00:16:31.274 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:31.274 "assigned_rate_limits": { 00:16:31.274 "rw_ios_per_sec": 0, 00:16:31.274 "rw_mbytes_per_sec": 0, 00:16:31.274 "r_mbytes_per_sec": 0, 00:16:31.274 "w_mbytes_per_sec": 0 00:16:31.274 }, 00:16:31.274 "claimed": false, 00:16:31.274 "zoned": false, 00:16:31.274 "supported_io_types": { 00:16:31.274 "read": true, 00:16:31.274 "write": true, 00:16:31.274 "unmap": false, 00:16:31.274 "flush": false, 00:16:31.275 "reset": true, 00:16:31.275 "nvme_admin": false, 00:16:31.275 "nvme_io": false, 00:16:31.275 "nvme_io_md": false, 00:16:31.275 "write_zeroes": true, 00:16:31.275 "zcopy": false, 00:16:31.275 "get_zone_info": 
false, 00:16:31.275 "zone_management": false, 00:16:31.275 "zone_append": false, 00:16:31.275 "compare": false, 00:16:31.275 "compare_and_write": false, 00:16:31.275 "abort": false, 00:16:31.275 "seek_hole": false, 00:16:31.275 "seek_data": false, 00:16:31.275 "copy": false, 00:16:31.275 "nvme_iov_md": false 00:16:31.275 }, 00:16:31.275 "memory_domains": [ 00:16:31.275 { 00:16:31.275 "dma_device_id": "system", 00:16:31.275 "dma_device_type": 1 00:16:31.275 }, 00:16:31.275 { 00:16:31.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.275 "dma_device_type": 2 00:16:31.275 }, 00:16:31.275 { 00:16:31.275 "dma_device_id": "system", 00:16:31.275 "dma_device_type": 1 00:16:31.275 }, 00:16:31.275 { 00:16:31.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.275 "dma_device_type": 2 00:16:31.275 }, 00:16:31.275 { 00:16:31.275 "dma_device_id": "system", 00:16:31.275 "dma_device_type": 1 00:16:31.275 }, 00:16:31.275 { 00:16:31.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.275 "dma_device_type": 2 00:16:31.275 } 00:16:31.275 ], 00:16:31.275 "driver_specific": { 00:16:31.275 "raid": { 00:16:31.275 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:31.275 "strip_size_kb": 0, 00:16:31.275 "state": "online", 00:16:31.275 "raid_level": "raid1", 00:16:31.275 "superblock": true, 00:16:31.275 "num_base_bdevs": 3, 00:16:31.275 "num_base_bdevs_discovered": 3, 00:16:31.275 "num_base_bdevs_operational": 3, 00:16:31.275 "base_bdevs_list": [ 00:16:31.275 { 00:16:31.275 "name": "pt1", 00:16:31.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.275 "is_configured": true, 00:16:31.275 "data_offset": 2048, 00:16:31.275 "data_size": 63488 00:16:31.275 }, 00:16:31.275 { 00:16:31.275 "name": "pt2", 00:16:31.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.275 "is_configured": true, 00:16:31.275 "data_offset": 2048, 00:16:31.275 "data_size": 63488 00:16:31.275 }, 00:16:31.275 { 00:16:31.275 "name": "pt3", 00:16:31.275 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:16:31.275 "is_configured": true, 00:16:31.275 "data_offset": 2048, 00:16:31.275 "data_size": 63488 00:16:31.275 } 00:16:31.275 ] 00:16:31.275 } 00:16:31.275 } 00:16:31.275 }' 00:16:31.275 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.275 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:31.275 pt2 00:16:31.275 pt3' 00:16:31.275 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.532 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.533 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.533 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.533 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.533 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.533 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:31.533 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.533 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.533 [2024-12-06 13:10:38.025741] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.533 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ca9b89b9-fc3e-4821-a4aa-a058a2e04053 '!=' ca9b89b9-fc3e-4821-a4aa-a058a2e04053 ']' 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.790 [2024-12-06 13:10:38.073354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.790 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.791 13:10:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.791 "name": "raid_bdev1", 00:16:31.791 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:31.791 "strip_size_kb": 0, 00:16:31.791 "state": "online", 00:16:31.791 "raid_level": "raid1", 00:16:31.791 "superblock": true, 00:16:31.791 "num_base_bdevs": 3, 00:16:31.791 "num_base_bdevs_discovered": 2, 00:16:31.791 "num_base_bdevs_operational": 2, 00:16:31.791 "base_bdevs_list": [ 00:16:31.791 { 00:16:31.791 "name": null, 00:16:31.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.791 "is_configured": false, 00:16:31.791 "data_offset": 0, 00:16:31.791 "data_size": 63488 00:16:31.791 }, 00:16:31.791 { 00:16:31.791 "name": "pt2", 00:16:31.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.791 "is_configured": true, 00:16:31.791 "data_offset": 2048, 00:16:31.791 "data_size": 63488 00:16:31.791 }, 00:16:31.791 { 00:16:31.791 "name": "pt3", 00:16:31.791 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.791 "is_configured": true, 00:16:31.791 "data_offset": 2048, 00:16:31.791 "data_size": 63488 00:16:31.791 } 
00:16:31.791 ] 00:16:31.791 }' 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.791 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.355 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.356 [2024-12-06 13:10:38.625555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.356 [2024-12-06 13:10:38.625604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.356 [2024-12-06 13:10:38.625721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.356 [2024-12-06 13:10:38.625842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.356 [2024-12-06 13:10:38.625881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.356 13:10:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.356 [2024-12-06 13:10:38.705506] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.356 [2024-12-06 13:10:38.705599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.356 [2024-12-06 13:10:38.705626] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:32.356 [2024-12-06 13:10:38.705643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.356 [2024-12-06 13:10:38.709022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.356 [2024-12-06 13:10:38.709085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.356 [2024-12-06 13:10:38.709230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:32.356 [2024-12-06 13:10:38.709300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.356 pt2 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.356 13:10:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.356 "name": "raid_bdev1", 00:16:32.356 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:32.356 "strip_size_kb": 0, 00:16:32.356 "state": "configuring", 00:16:32.356 "raid_level": "raid1", 00:16:32.356 "superblock": true, 00:16:32.356 "num_base_bdevs": 3, 00:16:32.356 "num_base_bdevs_discovered": 1, 00:16:32.356 "num_base_bdevs_operational": 2, 00:16:32.356 "base_bdevs_list": [ 00:16:32.356 { 00:16:32.356 "name": null, 00:16:32.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.356 "is_configured": false, 00:16:32.356 "data_offset": 2048, 00:16:32.356 "data_size": 63488 00:16:32.356 }, 00:16:32.356 { 00:16:32.356 "name": "pt2", 00:16:32.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.356 "is_configured": true, 00:16:32.356 "data_offset": 2048, 00:16:32.356 "data_size": 63488 00:16:32.356 }, 00:16:32.356 { 00:16:32.356 "name": null, 00:16:32.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.356 "is_configured": false, 00:16:32.356 "data_offset": 2048, 00:16:32.356 "data_size": 63488 00:16:32.356 } 
00:16:32.356 ] 00:16:32.356 }' 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.356 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.924 [2024-12-06 13:10:39.241774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:32.924 [2024-12-06 13:10:39.241883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.924 [2024-12-06 13:10:39.241918] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:32.924 [2024-12-06 13:10:39.241938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.924 [2024-12-06 13:10:39.242631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.924 [2024-12-06 13:10:39.242663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:32.924 [2024-12-06 13:10:39.242803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:32.924 [2024-12-06 13:10:39.242846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:32.924 [2024-12-06 13:10:39.243002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:16:32.924 [2024-12-06 13:10:39.243025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:32.924 [2024-12-06 13:10:39.243393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:32.924 [2024-12-06 13:10:39.243633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:32.924 [2024-12-06 13:10:39.243650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:32.924 [2024-12-06 13:10:39.243832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.924 pt3 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.924 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.924 "name": "raid_bdev1", 00:16:32.924 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:32.924 "strip_size_kb": 0, 00:16:32.924 "state": "online", 00:16:32.924 "raid_level": "raid1", 00:16:32.924 "superblock": true, 00:16:32.924 "num_base_bdevs": 3, 00:16:32.924 "num_base_bdevs_discovered": 2, 00:16:32.924 "num_base_bdevs_operational": 2, 00:16:32.924 "base_bdevs_list": [ 00:16:32.924 { 00:16:32.924 "name": null, 00:16:32.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.924 "is_configured": false, 00:16:32.924 "data_offset": 2048, 00:16:32.924 "data_size": 63488 00:16:32.924 }, 00:16:32.924 { 00:16:32.924 "name": "pt2", 00:16:32.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.924 "is_configured": true, 00:16:32.924 "data_offset": 2048, 00:16:32.924 "data_size": 63488 00:16:32.924 }, 00:16:32.924 { 00:16:32.924 "name": "pt3", 00:16:32.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.924 "is_configured": true, 00:16:32.924 "data_offset": 2048, 00:16:32.924 "data_size": 63488 00:16:32.925 } 00:16:32.925 ] 00:16:32.925 }' 00:16:32.925 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.925 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.491 [2024-12-06 13:10:39.789927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.491 [2024-12-06 13:10:39.789976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.491 [2024-12-06 13:10:39.790087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.491 [2024-12-06 13:10:39.790177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.491 [2024-12-06 13:10:39.790192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.491 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.491 [2024-12-06 13:10:39.857919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.491 [2024-12-06 13:10:39.858016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.491 [2024-12-06 13:10:39.858046] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:33.491 [2024-12-06 13:10:39.858061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.491 [2024-12-06 13:10:39.861309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.491 [2024-12-06 13:10:39.861350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.491 [2024-12-06 13:10:39.861516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:33.491 [2024-12-06 13:10:39.861582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.491 [2024-12-06 13:10:39.861748] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:33.491 [2024-12-06 13:10:39.861765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.491 [2024-12-06 13:10:39.861787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:16:33.491 [2024-12-06 13:10:39.861881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.491 pt1 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.492 "name": "raid_bdev1", 00:16:33.492 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:33.492 "strip_size_kb": 0, 00:16:33.492 "state": "configuring", 00:16:33.492 "raid_level": "raid1", 00:16:33.492 "superblock": true, 00:16:33.492 "num_base_bdevs": 3, 00:16:33.492 "num_base_bdevs_discovered": 1, 00:16:33.492 "num_base_bdevs_operational": 2, 00:16:33.492 "base_bdevs_list": [ 00:16:33.492 { 00:16:33.492 "name": null, 00:16:33.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.492 "is_configured": false, 00:16:33.492 "data_offset": 2048, 00:16:33.492 "data_size": 63488 00:16:33.492 }, 00:16:33.492 { 00:16:33.492 "name": "pt2", 00:16:33.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.492 "is_configured": true, 00:16:33.492 "data_offset": 2048, 00:16:33.492 "data_size": 63488 00:16:33.492 }, 00:16:33.492 { 00:16:33.492 "name": null, 00:16:33.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.492 "is_configured": false, 00:16:33.492 "data_offset": 2048, 00:16:33.492 "data_size": 63488 00:16:33.492 } 00:16:33.492 ] 00:16:33.492 }' 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.492 13:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.059 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:34.059 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:34.059 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.060 [2024-12-06 13:10:40.426246] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:34.060 [2024-12-06 13:10:40.426400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.060 [2024-12-06 13:10:40.426440] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:34.060 [2024-12-06 13:10:40.426455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.060 [2024-12-06 13:10:40.427194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.060 [2024-12-06 13:10:40.427224] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:34.060 [2024-12-06 13:10:40.427353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:34.060 [2024-12-06 13:10:40.427419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:34.060 [2024-12-06 13:10:40.427609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:34.060 [2024-12-06 13:10:40.427627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:34.060 [2024-12-06 13:10:40.427960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:34.060 [2024-12-06 13:10:40.428164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:34.060 [2024-12-06 13:10:40.428188] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:34.060 [2024-12-06 13:10:40.428371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.060 pt3 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.060 "name": "raid_bdev1", 00:16:34.060 "uuid": "ca9b89b9-fc3e-4821-a4aa-a058a2e04053", 00:16:34.060 "strip_size_kb": 0, 00:16:34.060 "state": "online", 00:16:34.060 "raid_level": "raid1", 00:16:34.060 "superblock": true, 00:16:34.060 "num_base_bdevs": 3, 00:16:34.060 "num_base_bdevs_discovered": 2, 00:16:34.060 "num_base_bdevs_operational": 2, 00:16:34.060 "base_bdevs_list": [ 00:16:34.060 { 00:16:34.060 "name": null, 00:16:34.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.060 "is_configured": false, 00:16:34.060 "data_offset": 2048, 00:16:34.060 "data_size": 63488 00:16:34.060 }, 00:16:34.060 { 00:16:34.060 "name": "pt2", 00:16:34.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.060 "is_configured": true, 00:16:34.060 "data_offset": 2048, 00:16:34.060 "data_size": 63488 00:16:34.060 }, 00:16:34.060 { 00:16:34.060 "name": "pt3", 00:16:34.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.060 "is_configured": true, 00:16:34.060 "data_offset": 2048, 00:16:34.060 "data_size": 63488 00:16:34.060 } 00:16:34.060 ] 00:16:34.060 }' 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.060 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.627 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.627 [2024-12-06 13:10:40.990821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.627 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.627 13:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ca9b89b9-fc3e-4821-a4aa-a058a2e04053 '!=' ca9b89b9-fc3e-4821-a4aa-a058a2e04053 ']' 00:16:34.627 13:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68976 00:16:34.627 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68976 ']' 00:16:34.627 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68976 00:16:34.627 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:34.628 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.628 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68976 00:16:34.628 killing process with pid 68976 00:16:34.628 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.628 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.628 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68976' 00:16:34.628 13:10:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68976 00:16:34.628 [2024-12-06 13:10:41.064178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.628 13:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68976 00:16:34.628 [2024-12-06 13:10:41.064314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.628 [2024-12-06 13:10:41.064401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.628 [2024-12-06 13:10:41.064422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:34.887 [2024-12-06 13:10:41.330920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:36.265 13:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:36.265 00:16:36.265 real 0m8.878s 00:16:36.265 user 0m14.452s 00:16:36.265 sys 0m1.339s 00:16:36.265 13:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.265 ************************************ 00:16:36.265 END TEST raid_superblock_test 00:16:36.265 ************************************ 00:16:36.265 13:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.265 13:10:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:16:36.265 13:10:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:36.265 13:10:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.265 13:10:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.265 ************************************ 00:16:36.265 START TEST raid_read_error_test 00:16:36.265 ************************************ 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:16:36.265 13:10:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:36.265 13:10:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EWJy4CCjdM 00:16:36.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69440 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69440 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69440 ']' 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.265 13:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.265 [2024-12-06 13:10:42.579177] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:16:36.265 [2024-12-06 13:10:42.579635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69440 ] 00:16:36.265 [2024-12-06 13:10:42.759741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.524 [2024-12-06 13:10:42.896610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.782 [2024-12-06 13:10:43.107724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.782 [2024-12-06 13:10:43.107831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 BaseBdev1_malloc 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 true 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 [2024-12-06 13:10:43.715173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:37.349 [2024-12-06 13:10:43.715551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.349 [2024-12-06 13:10:43.715594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:37.349 [2024-12-06 13:10:43.715614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.349 [2024-12-06 13:10:43.719544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.349 [2024-12-06 13:10:43.719902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:37.349 BaseBdev1 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 BaseBdev2_malloc 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 true 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 [2024-12-06 13:10:43.780929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:37.349 [2024-12-06 13:10:43.781040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.349 [2024-12-06 13:10:43.781064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:37.349 [2024-12-06 13:10:43.781080] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.349 [2024-12-06 13:10:43.783889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.349 [2024-12-06 13:10:43.783948] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:37.349 BaseBdev2 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 BaseBdev3_malloc 00:16:37.349 13:10:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 true 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.349 [2024-12-06 13:10:43.855238] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:37.349 [2024-12-06 13:10:43.855591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.349 [2024-12-06 13:10:43.855630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:37.349 [2024-12-06 13:10:43.855665] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.349 [2024-12-06 13:10:43.858686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.349 [2024-12-06 13:10:43.858916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:37.349 BaseBdev3 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.349 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.350 [2024-12-06 13:10:43.867554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.350 [2024-12-06 13:10:43.869939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.350 [2024-12-06 13:10:43.870029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.350 [2024-12-06 13:10:43.870301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:37.350 [2024-12-06 13:10:43.870319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:37.350 [2024-12-06 13:10:43.870686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:37.350 [2024-12-06 13:10:43.870952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:37.350 [2024-12-06 13:10:43.870976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:37.350 [2024-12-06 13:10:43.871145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.350 13:10:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.350 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.608 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.608 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.608 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.608 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.608 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.608 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.608 "name": "raid_bdev1", 00:16:37.608 "uuid": "79b34aa9-e5bd-48e6-9a77-a2be3828eb8b", 00:16:37.608 "strip_size_kb": 0, 00:16:37.608 "state": "online", 00:16:37.608 "raid_level": "raid1", 00:16:37.608 "superblock": true, 00:16:37.608 "num_base_bdevs": 3, 00:16:37.608 "num_base_bdevs_discovered": 3, 00:16:37.608 "num_base_bdevs_operational": 3, 00:16:37.608 "base_bdevs_list": [ 00:16:37.608 { 00:16:37.608 "name": "BaseBdev1", 00:16:37.608 "uuid": "7ae94370-93bb-5360-9553-8e3c5dbba412", 00:16:37.608 "is_configured": true, 00:16:37.608 "data_offset": 2048, 00:16:37.608 "data_size": 63488 00:16:37.608 }, 00:16:37.608 { 00:16:37.608 "name": "BaseBdev2", 00:16:37.608 "uuid": "9c0da5e4-5276-5f7d-b89b-36e512e67d43", 00:16:37.608 "is_configured": true, 00:16:37.608 "data_offset": 2048, 00:16:37.608 "data_size": 63488 
00:16:37.608 }, 00:16:37.608 { 00:16:37.608 "name": "BaseBdev3", 00:16:37.608 "uuid": "8f626853-f4f6-5a55-b125-4f0ffed61eef", 00:16:37.608 "is_configured": true, 00:16:37.608 "data_offset": 2048, 00:16:37.608 "data_size": 63488 00:16:37.608 } 00:16:37.608 ] 00:16:37.608 }' 00:16:37.608 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.608 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.867 13:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:37.867 13:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:38.125 [2024-12-06 13:10:44.577268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.065 
13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.065 "name": "raid_bdev1", 00:16:39.065 "uuid": "79b34aa9-e5bd-48e6-9a77-a2be3828eb8b", 00:16:39.065 "strip_size_kb": 0, 00:16:39.065 "state": "online", 00:16:39.065 "raid_level": "raid1", 00:16:39.065 "superblock": true, 00:16:39.065 "num_base_bdevs": 3, 00:16:39.065 "num_base_bdevs_discovered": 3, 00:16:39.065 "num_base_bdevs_operational": 3, 00:16:39.065 "base_bdevs_list": [ 00:16:39.065 { 00:16:39.065 "name": "BaseBdev1", 00:16:39.065 "uuid": "7ae94370-93bb-5360-9553-8e3c5dbba412", 
00:16:39.065 "is_configured": true, 00:16:39.065 "data_offset": 2048, 00:16:39.065 "data_size": 63488 00:16:39.065 }, 00:16:39.065 { 00:16:39.065 "name": "BaseBdev2", 00:16:39.065 "uuid": "9c0da5e4-5276-5f7d-b89b-36e512e67d43", 00:16:39.065 "is_configured": true, 00:16:39.065 "data_offset": 2048, 00:16:39.065 "data_size": 63488 00:16:39.065 }, 00:16:39.065 { 00:16:39.065 "name": "BaseBdev3", 00:16:39.065 "uuid": "8f626853-f4f6-5a55-b125-4f0ffed61eef", 00:16:39.065 "is_configured": true, 00:16:39.065 "data_offset": 2048, 00:16:39.065 "data_size": 63488 00:16:39.065 } 00:16:39.065 ] 00:16:39.065 }' 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.065 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.631 [2024-12-06 13:10:45.959809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.631 [2024-12-06 13:10:45.959856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.631 [2024-12-06 13:10:45.963313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.631 { 00:16:39.631 "results": [ 00:16:39.631 { 00:16:39.631 "job": "raid_bdev1", 00:16:39.631 "core_mask": "0x1", 00:16:39.631 "workload": "randrw", 00:16:39.631 "percentage": 50, 00:16:39.631 "status": "finished", 00:16:39.631 "queue_depth": 1, 00:16:39.631 "io_size": 131072, 00:16:39.631 "runtime": 1.380012, 00:16:39.631 "iops": 8241.957316313192, 00:16:39.631 "mibps": 1030.244664539149, 00:16:39.631 "io_failed": 0, 00:16:39.631 "io_timeout": 0, 00:16:39.631 "avg_latency_us": 116.97068337676039, 
00:16:39.631 "min_latency_us": 39.56363636363636, 00:16:39.631 "max_latency_us": 1921.3963636363637 00:16:39.631 } 00:16:39.631 ], 00:16:39.631 "core_count": 1 00:16:39.631 } 00:16:39.631 [2024-12-06 13:10:45.963557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.631 [2024-12-06 13:10:45.963768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.631 [2024-12-06 13:10:45.963789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69440 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69440 ']' 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69440 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.631 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69440 00:16:39.631 killing process with pid 69440 00:16:39.631 13:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.631 13:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.631 13:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69440' 00:16:39.632 13:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69440 00:16:39.632 [2024-12-06 13:10:46.005287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.632 13:10:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69440 00:16:39.890 [2024-12-06 13:10:46.208926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EWJy4CCjdM 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:41.264 ************************************ 00:16:41.264 END TEST raid_read_error_test 00:16:41.264 ************************************ 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:41.264 00:16:41.264 real 0m5.019s 00:16:41.264 user 0m6.241s 00:16:41.264 sys 0m0.673s 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.264 13:10:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.264 13:10:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:16:41.264 13:10:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:41.264 13:10:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.264 13:10:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.264 ************************************ 00:16:41.264 START TEST raid_write_error_test 00:16:41.264 ************************************ 00:16:41.264 13:10:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k8punXEXiV 00:16:41.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69587 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69587 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69587 ']' 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.264 13:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.264 [2024-12-06 13:10:47.662839] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:16:41.264 [2024-12-06 13:10:47.663312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69587 ] 00:16:41.523 [2024-12-06 13:10:47.829441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.523 [2024-12-06 13:10:47.968007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.781 [2024-12-06 13:10:48.181603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.781 [2024-12-06 13:10:48.181698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 BaseBdev1_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 true 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 [2024-12-06 13:10:48.704373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:42.349 [2024-12-06 13:10:48.704563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.349 [2024-12-06 13:10:48.704616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:42.349 [2024-12-06 13:10:48.704651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.349 [2024-12-06 13:10:48.708082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.349 [2024-12-06 13:10:48.708150] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:42.349 BaseBdev1 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.349 BaseBdev2_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 true 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 [2024-12-06 13:10:48.775851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:42.349 [2024-12-06 13:10:48.775991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.349 [2024-12-06 13:10:48.776018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:42.349 [2024-12-06 13:10:48.776051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.349 [2024-12-06 13:10:48.779379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.349 [2024-12-06 13:10:48.779678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.349 BaseBdev2 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:42.349 13:10:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 BaseBdev3_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 true 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 [2024-12-06 13:10:48.850080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:42.349 [2024-12-06 13:10:48.850237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.349 [2024-12-06 13:10:48.850321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:42.349 [2024-12-06 13:10:48.850343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.349 [2024-12-06 13:10:48.854197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.349 [2024-12-06 13:10:48.854287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:16:42.349 BaseBdev3 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 [2024-12-06 13:10:48.858654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.349 [2024-12-06 13:10:48.861923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.349 [2024-12-06 13:10:48.862236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.349 [2024-12-06 13:10:48.862794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:42.349 [2024-12-06 13:10:48.863046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:42.349 [2024-12-06 13:10:48.863536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:42.349 [2024-12-06 13:10:48.864022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:42.349 [2024-12-06 13:10:48.864159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:42.349 [2024-12-06 13:10:48.864489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.350 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.612 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.612 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.612 "name": "raid_bdev1", 00:16:42.612 "uuid": "5c562d27-f21b-4fad-adad-6c2d3708950e", 00:16:42.612 "strip_size_kb": 0, 00:16:42.612 "state": "online", 00:16:42.612 "raid_level": "raid1", 00:16:42.612 "superblock": true, 00:16:42.612 "num_base_bdevs": 3, 00:16:42.612 "num_base_bdevs_discovered": 3, 00:16:42.612 "num_base_bdevs_operational": 3, 00:16:42.612 "base_bdevs_list": [ 00:16:42.612 { 00:16:42.612 "name": "BaseBdev1", 00:16:42.612 
"uuid": "5e7604ac-1331-55c1-9b5f-09199d30be7c", 00:16:42.612 "is_configured": true, 00:16:42.612 "data_offset": 2048, 00:16:42.612 "data_size": 63488 00:16:42.612 }, 00:16:42.612 { 00:16:42.612 "name": "BaseBdev2", 00:16:42.612 "uuid": "222f1ad9-0a58-50ba-b57f-309da69c4db3", 00:16:42.612 "is_configured": true, 00:16:42.612 "data_offset": 2048, 00:16:42.612 "data_size": 63488 00:16:42.612 }, 00:16:42.612 { 00:16:42.612 "name": "BaseBdev3", 00:16:42.612 "uuid": "d5a33779-2772-5438-a66e-90e68f645d59", 00:16:42.612 "is_configured": true, 00:16:42.612 "data_offset": 2048, 00:16:42.612 "data_size": 63488 00:16:42.612 } 00:16:42.612 ] 00:16:42.612 }' 00:16:42.612 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.612 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.888 13:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:42.888 13:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:43.146 [2024-12-06 13:10:49.592924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.082 [2024-12-06 13:10:50.416576] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:44.082 [2024-12-06 13:10:50.416667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.082 [2024-12-06 13:10:50.416964] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:44.082 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.083 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.083 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.083 "name": "raid_bdev1", 00:16:44.083 "uuid": "5c562d27-f21b-4fad-adad-6c2d3708950e", 00:16:44.083 "strip_size_kb": 0, 00:16:44.083 "state": "online", 00:16:44.083 "raid_level": "raid1", 00:16:44.083 "superblock": true, 00:16:44.083 "num_base_bdevs": 3, 00:16:44.083 "num_base_bdevs_discovered": 2, 00:16:44.083 "num_base_bdevs_operational": 2, 00:16:44.083 "base_bdevs_list": [ 00:16:44.083 { 00:16:44.083 "name": null, 00:16:44.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.083 "is_configured": false, 00:16:44.083 "data_offset": 0, 00:16:44.083 "data_size": 63488 00:16:44.083 }, 00:16:44.083 { 00:16:44.083 "name": "BaseBdev2", 00:16:44.083 "uuid": "222f1ad9-0a58-50ba-b57f-309da69c4db3", 00:16:44.083 "is_configured": true, 00:16:44.083 "data_offset": 2048, 00:16:44.083 "data_size": 63488 00:16:44.083 }, 00:16:44.083 { 00:16:44.083 "name": "BaseBdev3", 00:16:44.083 "uuid": "d5a33779-2772-5438-a66e-90e68f645d59", 00:16:44.083 "is_configured": true, 00:16:44.083 "data_offset": 2048, 00:16:44.083 "data_size": 63488 00:16:44.083 } 00:16:44.083 ] 00:16:44.083 }' 00:16:44.083 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.083 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.673 [2024-12-06 13:10:50.943843] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.673 [2024-12-06 13:10:50.944073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.673 [2024-12-06 13:10:50.947647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.673 [2024-12-06 13:10:50.947904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.673 [2024-12-06 13:10:50.948128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.673 [2024-12-06 13:10:50.948353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:44.673 { 00:16:44.673 "results": [ 00:16:44.673 { 00:16:44.673 "job": "raid_bdev1", 00:16:44.673 "core_mask": "0x1", 00:16:44.673 "workload": "randrw", 00:16:44.673 "percentage": 50, 00:16:44.673 "status": "finished", 00:16:44.673 "queue_depth": 1, 00:16:44.673 "io_size": 131072, 00:16:44.673 "runtime": 1.348246, 00:16:44.673 "iops": 8649.015090717867, 00:16:44.673 "mibps": 1081.1268863397333, 00:16:44.673 "io_failed": 0, 00:16:44.673 "io_timeout": 0, 00:16:44.673 "avg_latency_us": 110.96733135315074, 00:16:44.673 "min_latency_us": 41.192727272727275, 00:16:44.673 "max_latency_us": 2025.658181818182 00:16:44.673 } 00:16:44.673 ], 00:16:44.673 "core_count": 1 00:16:44.673 } 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69587 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69587 ']' 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69587 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:16:44.673 13:10:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69587 00:16:44.673 killing process with pid 69587 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69587' 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69587 00:16:44.673 [2024-12-06 13:10:50.991121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.673 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69587 00:16:44.932 [2024-12-06 13:10:51.202256] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k8punXEXiV 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:46.306 00:16:46.306 real 0m4.852s 00:16:46.306 user 0m5.977s 00:16:46.306 sys 0m0.655s 00:16:46.306 13:10:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.306 ************************************ 00:16:46.306 END TEST raid_write_error_test 00:16:46.306 ************************************ 00:16:46.306 13:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.306 13:10:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:16:46.306 13:10:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:46.306 13:10:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:16:46.306 13:10:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:46.306 13:10:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.306 13:10:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.306 ************************************ 00:16:46.306 START TEST raid_state_function_test 00:16:46.306 ************************************ 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:46.306 
13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69730 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:46.306 Process raid pid: 69730 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69730' 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69730 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69730 ']' 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.306 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.306 [2024-12-06 13:10:52.567977] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:16:46.306 [2024-12-06 13:10:52.568436] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.306 [2024-12-06 13:10:52.751948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.621 [2024-12-06 13:10:52.931316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.879 [2024-12-06 13:10:53.165630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.879 [2024-12-06 13:10:53.165991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.137 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.137 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:47.137 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:47.137 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.137 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.137 [2024-12-06 13:10:53.579189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.137 [2024-12-06 13:10:53.579412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.137 [2024-12-06 13:10:53.579441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.137 [2024-12-06 13:10:53.579475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.137 [2024-12-06 13:10:53.579487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:47.137 [2024-12-06 13:10:53.579502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.137 [2024-12-06 13:10:53.579512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:47.137 [2024-12-06 13:10:53.579528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:47.137 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.137 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.138 "name": "Existed_Raid", 00:16:47.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.138 "strip_size_kb": 64, 00:16:47.138 "state": "configuring", 00:16:47.138 "raid_level": "raid0", 00:16:47.138 "superblock": false, 00:16:47.138 "num_base_bdevs": 4, 00:16:47.138 "num_base_bdevs_discovered": 0, 00:16:47.138 "num_base_bdevs_operational": 4, 00:16:47.138 "base_bdevs_list": [ 00:16:47.138 { 00:16:47.138 "name": "BaseBdev1", 00:16:47.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.138 "is_configured": false, 00:16:47.138 "data_offset": 0, 00:16:47.138 "data_size": 0 00:16:47.138 }, 00:16:47.138 { 00:16:47.138 "name": "BaseBdev2", 00:16:47.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.138 "is_configured": false, 00:16:47.138 "data_offset": 0, 00:16:47.138 "data_size": 0 00:16:47.138 }, 00:16:47.138 { 00:16:47.138 "name": "BaseBdev3", 00:16:47.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.138 "is_configured": false, 00:16:47.138 "data_offset": 0, 00:16:47.138 "data_size": 0 00:16:47.138 }, 00:16:47.138 { 00:16:47.138 "name": "BaseBdev4", 00:16:47.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.138 "is_configured": false, 00:16:47.138 "data_offset": 0, 00:16:47.138 "data_size": 0 00:16:47.138 } 00:16:47.138 ] 00:16:47.138 }' 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.138 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:47.702 [2024-12-06 13:10:54.123284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:47.702 [2024-12-06 13:10:54.123335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:47.702 [2024-12-06 13:10:54.131239] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:47.702 [2024-12-06 13:10:54.131310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:47.702 [2024-12-06 13:10:54.131325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:47.702 [2024-12-06 13:10:54.131341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:47.702 [2024-12-06 13:10:54.131350] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:47.702 [2024-12-06 13:10:54.131365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:47.702 [2024-12-06 13:10:54.131375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:47.702 [2024-12-06 13:10:54.131389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:47.702 [2024-12-06 13:10:54.180509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:47.702 BaseBdev1
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.702 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:47.702 [
00:16:47.702 {
00:16:47.702 "name": "BaseBdev1",
00:16:47.702 "aliases": [
00:16:47.702 "5f666e0b-1450-480a-82df-322c6ebdb2e4"
00:16:47.702 ],
00:16:47.702 "product_name": "Malloc disk",
00:16:47.702 "block_size": 512,
00:16:47.702 "num_blocks": 65536,
00:16:47.702 "uuid": "5f666e0b-1450-480a-82df-322c6ebdb2e4",
00:16:47.702 "assigned_rate_limits": {
00:16:47.702 "rw_ios_per_sec": 0,
00:16:47.702 "rw_mbytes_per_sec": 0,
00:16:47.702 "r_mbytes_per_sec": 0,
00:16:47.702 "w_mbytes_per_sec": 0
00:16:47.702 },
00:16:47.702 "claimed": true,
00:16:47.703 "claim_type": "exclusive_write",
00:16:47.703 "zoned": false,
00:16:47.703 "supported_io_types": {
00:16:47.703 "read": true,
00:16:47.703 "write": true,
00:16:47.703 "unmap": true,
00:16:47.703 "flush": true,
00:16:47.703 "reset": true,
00:16:47.703 "nvme_admin": false,
00:16:47.703 "nvme_io": false,
00:16:47.703 "nvme_io_md": false,
00:16:47.703 "write_zeroes": true,
00:16:47.703 "zcopy": true,
00:16:47.703 "get_zone_info": false,
00:16:47.703 "zone_management": false,
00:16:47.703 "zone_append": false,
00:16:47.703 "compare": false,
00:16:47.703 "compare_and_write": false,
00:16:47.703 "abort": true,
00:16:47.703 "seek_hole": false,
00:16:47.703 "seek_data": false,
00:16:47.703 "copy": true,
00:16:47.703 "nvme_iov_md": false
00:16:47.703 },
00:16:47.703 "memory_domains": [
00:16:47.703 {
00:16:47.703 "dma_device_id": "system",
00:16:47.703 "dma_device_type": 1
00:16:47.703 },
00:16:47.703 {
00:16:47.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:47.703 "dma_device_type": 2
00:16:47.703 }
00:16:47.703 ],
00:16:47.703 "driver_specific": {}
00:16:47.703 }
00:16:47.703 ]
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.703 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:47.960 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.960 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:47.960 "name": "Existed_Raid",
00:16:47.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:47.960 "strip_size_kb": 64,
00:16:47.960 "state": "configuring",
00:16:47.960 "raid_level": "raid0",
00:16:47.960 "superblock": false,
00:16:47.960 "num_base_bdevs": 4,
00:16:47.960 "num_base_bdevs_discovered": 1,
00:16:47.960 "num_base_bdevs_operational": 4,
00:16:47.960 "base_bdevs_list": [
00:16:47.960 {
00:16:47.960 "name": "BaseBdev1",
00:16:47.960 "uuid": "5f666e0b-1450-480a-82df-322c6ebdb2e4",
00:16:47.960 "is_configured": true,
00:16:47.960 "data_offset": 0,
00:16:47.960 "data_size": 65536
00:16:47.960 },
00:16:47.960 {
00:16:47.960 "name": "BaseBdev2",
00:16:47.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:47.960 "is_configured": false,
00:16:47.960 "data_offset": 0,
00:16:47.960 "data_size": 0
00:16:47.960 },
00:16:47.960 {
00:16:47.960 "name": "BaseBdev3",
00:16:47.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:47.960 "is_configured": false,
00:16:47.960 "data_offset": 0,
00:16:47.960 "data_size": 0
00:16:47.960 },
00:16:47.960 {
00:16:47.960 "name": "BaseBdev4",
00:16:47.960 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:47.960 "is_configured": false,
00:16:47.960 "data_offset": 0,
00:16:47.960 "data_size": 0
00:16:47.960 }
00:16:47.960 ]
00:16:47.960 }'
00:16:47.960 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:47.960 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:48.526 [2024-12-06 13:10:54.796822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:48.526 [2024-12-06 13:10:54.796924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:48.526 [2024-12-06 13:10:54.804867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:48.526 [2024-12-06 13:10:54.807651] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:48.526 [2024-12-06 13:10:54.807709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:48.526 [2024-12-06 13:10:54.807726] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:48.526 [2024-12-06 13:10:54.807745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:48.526 [2024-12-06 13:10:54.807756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:48.526 [2024-12-06 13:10:54.807770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:48.526 "name": "Existed_Raid",
00:16:48.526 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.526 "strip_size_kb": 64,
00:16:48.526 "state": "configuring",
00:16:48.526 "raid_level": "raid0",
00:16:48.526 "superblock": false,
00:16:48.526 "num_base_bdevs": 4,
00:16:48.526 "num_base_bdevs_discovered": 1,
00:16:48.526 "num_base_bdevs_operational": 4,
00:16:48.526 "base_bdevs_list": [
00:16:48.526 {
00:16:48.526 "name": "BaseBdev1",
00:16:48.526 "uuid": "5f666e0b-1450-480a-82df-322c6ebdb2e4",
00:16:48.526 "is_configured": true,
00:16:48.526 "data_offset": 0,
00:16:48.526 "data_size": 65536
00:16:48.526 },
00:16:48.526 {
00:16:48.526 "name": "BaseBdev2",
00:16:48.526 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.526 "is_configured": false,
00:16:48.526 "data_offset": 0,
00:16:48.526 "data_size": 0
00:16:48.526 },
00:16:48.526 {
00:16:48.526 "name": "BaseBdev3",
00:16:48.526 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.526 "is_configured": false,
00:16:48.526 "data_offset": 0,
00:16:48.526 "data_size": 0
00:16:48.526 },
00:16:48.526 {
00:16:48.526 "name": "BaseBdev4",
00:16:48.526 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.526 "is_configured": false,
00:16:48.526 "data_offset": 0,
00:16:48.526 "data_size": 0
00:16:48.526 }
00:16:48.526 ]
00:16:48.526 }'
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:48.526 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.093 [2024-12-06 13:10:55.381813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:49.093 BaseBdev2
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.093 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.093 [
00:16:49.093 {
00:16:49.093 "name": "BaseBdev2",
00:16:49.093 "aliases": [
00:16:49.093 "f7a42efa-c600-46f1-b504-5f6ee11671fc"
00:16:49.093 ],
00:16:49.093 "product_name": "Malloc disk",
00:16:49.093 "block_size": 512,
00:16:49.093 "num_blocks": 65536,
00:16:49.093 "uuid": "f7a42efa-c600-46f1-b504-5f6ee11671fc",
00:16:49.093 "assigned_rate_limits": {
00:16:49.093 "rw_ios_per_sec": 0,
00:16:49.093 "rw_mbytes_per_sec": 0,
00:16:49.093 "r_mbytes_per_sec": 0,
00:16:49.093 "w_mbytes_per_sec": 0
00:16:49.093 },
00:16:49.093 "claimed": true,
00:16:49.093 "claim_type": "exclusive_write",
00:16:49.093 "zoned": false,
00:16:49.093 "supported_io_types": {
00:16:49.093 "read": true,
00:16:49.093 "write": true,
00:16:49.093 "unmap": true,
00:16:49.093 "flush": true,
00:16:49.094 "reset": true,
00:16:49.094 "nvme_admin": false,
00:16:49.094 "nvme_io": false,
00:16:49.094 "nvme_io_md": false,
00:16:49.094 "write_zeroes": true,
00:16:49.094 "zcopy": true,
00:16:49.094 "get_zone_info": false,
00:16:49.094 "zone_management": false,
00:16:49.094 "zone_append": false,
00:16:49.094 "compare": false,
00:16:49.094 "compare_and_write": false,
00:16:49.094 "abort": true,
00:16:49.094 "seek_hole": false,
00:16:49.094 "seek_data": false,
00:16:49.094 "copy": true,
00:16:49.094 "nvme_iov_md": false
00:16:49.094 },
00:16:49.094 "memory_domains": [
00:16:49.094 {
00:16:49.094 "dma_device_id": "system",
00:16:49.094 "dma_device_type": 1
00:16:49.094 },
00:16:49.094 {
00:16:49.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:49.094 "dma_device_type": 2
00:16:49.094 }
00:16:49.094 ],
00:16:49.094 "driver_specific": {}
00:16:49.094 }
00:16:49.094 ]
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:49.094 "name": "Existed_Raid",
00:16:49.094 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:49.094 "strip_size_kb": 64,
00:16:49.094 "state": "configuring",
00:16:49.094 "raid_level": "raid0",
00:16:49.094 "superblock": false,
00:16:49.094 "num_base_bdevs": 4,
00:16:49.094 "num_base_bdevs_discovered": 2,
00:16:49.094 "num_base_bdevs_operational": 4,
00:16:49.094 "base_bdevs_list": [
00:16:49.094 {
00:16:49.094 "name": "BaseBdev1",
00:16:49.094 "uuid": "5f666e0b-1450-480a-82df-322c6ebdb2e4",
00:16:49.094 "is_configured": true,
00:16:49.094 "data_offset": 0,
00:16:49.094 "data_size": 65536
00:16:49.094 },
00:16:49.094 {
00:16:49.094 "name": "BaseBdev2",
00:16:49.094 "uuid": "f7a42efa-c600-46f1-b504-5f6ee11671fc",
00:16:49.094 "is_configured": true,
00:16:49.094 "data_offset": 0,
00:16:49.094 "data_size": 65536
00:16:49.094 },
00:16:49.094 {
00:16:49.094 "name": "BaseBdev3",
00:16:49.094 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:49.094 "is_configured": false,
00:16:49.094 "data_offset": 0,
00:16:49.094 "data_size": 0
00:16:49.094 },
00:16:49.094 {
00:16:49.094 "name": "BaseBdev4",
00:16:49.094 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:49.094 "is_configured": false,
00:16:49.094 "data_offset": 0,
00:16:49.094 "data_size": 0
00:16:49.094 }
00:16:49.094 ]
00:16:49.094 }'
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:49.094 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.720 [2024-12-06 13:10:55.957562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:49.720 BaseBdev3
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.720 [
00:16:49.720 {
00:16:49.720 "name": "BaseBdev3",
00:16:49.720 "aliases": [
00:16:49.720 "8c937450-d3d1-40a7-a454-23a0fe5a7b3d"
00:16:49.720 ],
00:16:49.720 "product_name": "Malloc disk",
00:16:49.720 "block_size": 512,
00:16:49.720 "num_blocks": 65536,
00:16:49.720 "uuid": "8c937450-d3d1-40a7-a454-23a0fe5a7b3d",
00:16:49.720 "assigned_rate_limits": {
00:16:49.720 "rw_ios_per_sec": 0,
00:16:49.720 "rw_mbytes_per_sec": 0,
00:16:49.720 "r_mbytes_per_sec": 0,
00:16:49.720 "w_mbytes_per_sec": 0
00:16:49.720 },
00:16:49.720 "claimed": true,
00:16:49.720 "claim_type": "exclusive_write",
00:16:49.720 "zoned": false,
00:16:49.720 "supported_io_types": {
00:16:49.720 "read": true,
00:16:49.720 "write": true,
00:16:49.720 "unmap": true,
00:16:49.720 "flush": true,
00:16:49.720 "reset": true,
00:16:49.720 "nvme_admin": false,
00:16:49.720 "nvme_io": false,
00:16:49.720 "nvme_io_md": false,
00:16:49.720 "write_zeroes": true,
00:16:49.720 "zcopy": true,
00:16:49.720 "get_zone_info": false,
00:16:49.720 "zone_management": false,
00:16:49.720 "zone_append": false,
00:16:49.720 "compare": false,
00:16:49.720 "compare_and_write": false,
00:16:49.720 "abort": true,
00:16:49.720 "seek_hole": false,
00:16:49.720 "seek_data": false,
00:16:49.720 "copy": true,
00:16:49.720 "nvme_iov_md": false
00:16:49.720 },
00:16:49.720 "memory_domains": [
00:16:49.720 {
00:16:49.720 "dma_device_id": "system",
00:16:49.720 "dma_device_type": 1
00:16:49.720 },
00:16:49.720 {
00:16:49.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:49.720 "dma_device_type": 2
00:16:49.720 }
00:16:49.720 ],
00:16:49.720 "driver_specific": {}
00:16:49.720 }
00:16:49.720 ]
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:49.720 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:49.720 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:49.720 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:49.720 "name": "Existed_Raid",
00:16:49.720 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:49.720 "strip_size_kb": 64,
00:16:49.720 "state": "configuring",
00:16:49.720 "raid_level": "raid0",
00:16:49.720 "superblock": false,
00:16:49.720 "num_base_bdevs": 4,
00:16:49.720 "num_base_bdevs_discovered": 3,
00:16:49.720 "num_base_bdevs_operational": 4,
00:16:49.720 "base_bdevs_list": [
00:16:49.720 {
00:16:49.720 "name": "BaseBdev1",
00:16:49.720 "uuid": "5f666e0b-1450-480a-82df-322c6ebdb2e4",
00:16:49.720 "is_configured": true,
00:16:49.720 "data_offset": 0,
00:16:49.720 "data_size": 65536
00:16:49.720 },
00:16:49.720 {
00:16:49.720 "name": "BaseBdev2",
00:16:49.720 "uuid": "f7a42efa-c600-46f1-b504-5f6ee11671fc",
00:16:49.720 "is_configured": true,
00:16:49.720 "data_offset": 0,
00:16:49.720 "data_size": 65536
00:16:49.720 },
00:16:49.720 {
00:16:49.720 "name": "BaseBdev3",
00:16:49.720 "uuid": "8c937450-d3d1-40a7-a454-23a0fe5a7b3d",
00:16:49.720 "is_configured": true,
00:16:49.720 "data_offset": 0,
00:16:49.720 "data_size": 65536
00:16:49.720 },
00:16:49.720 {
00:16:49.720 "name": "BaseBdev4",
00:16:49.720 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:49.720 "is_configured": false,
00:16:49.720 "data_offset": 0,
00:16:49.720 "data_size": 0
00:16:49.720 }
00:16:49.720 ]
00:16:49.720 }'
00:16:49.720 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:49.720 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:50.287 [2024-12-06 13:10:56.571571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed [2024-12-06 13:10:56.571654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 [2024-12-06 13:10:56.571668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 [2024-12-06 13:10:56.572044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 [2024-12-06 13:10:56.572259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 [2024-12-06 13:10:56.572279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 [2024-12-06 13:10:56.572669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:50.287 BaseBdev4 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:50.287 [
00:16:50.287 {
00:16:50.287 "name": "BaseBdev4",
00:16:50.287 "aliases": [
00:16:50.287 "60e14efa-cee1-4739-9432-895582a0e6f5"
00:16:50.287 ],
00:16:50.287 "product_name": "Malloc disk",
00:16:50.287 "block_size": 512,
00:16:50.287 "num_blocks": 65536,
00:16:50.287 "uuid": "60e14efa-cee1-4739-9432-895582a0e6f5",
00:16:50.287 "assigned_rate_limits": {
00:16:50.287 "rw_ios_per_sec": 0,
00:16:50.287 "rw_mbytes_per_sec": 0,
00:16:50.287 "r_mbytes_per_sec": 0,
00:16:50.287 "w_mbytes_per_sec": 0
00:16:50.287 },
00:16:50.287 "claimed": true,
00:16:50.287 "claim_type": "exclusive_write",
00:16:50.287 "zoned": false,
00:16:50.287 "supported_io_types": {
00:16:50.287 "read": true,
00:16:50.287 "write": true,
00:16:50.287 "unmap": true,
00:16:50.287 "flush": true,
00:16:50.287 "reset": true,
00:16:50.287 "nvme_admin": false,
00:16:50.287 "nvme_io": false,
00:16:50.287 "nvme_io_md": false,
00:16:50.287 "write_zeroes": true,
00:16:50.287 "zcopy": true,
00:16:50.287 "get_zone_info": false,
00:16:50.287 "zone_management": false,
00:16:50.287 "zone_append": false,
00:16:50.287 "compare": false,
00:16:50.287 "compare_and_write": false,
00:16:50.287 "abort": true,
00:16:50.287 "seek_hole": false,
00:16:50.287 "seek_data": false,
00:16:50.287 "copy": true,
00:16:50.287 "nvme_iov_md": false
00:16:50.287 },
00:16:50.287 "memory_domains": [
00:16:50.287 {
00:16:50.287 "dma_device_id": "system",
00:16:50.287 "dma_device_type": 1
00:16:50.287 },
00:16:50.287 {
00:16:50.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:50.287 "dma_device_type": 2
00:16:50.287 }
00:16:50.287 ],
00:16:50.287 "driver_specific": {}
00:16:50.287 }
00:16:50.287 ]
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:50.287 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:50.288 "name": "Existed_Raid",
00:16:50.288 "uuid": "74318600-d2af-493b-a219-ed98864128f4",
00:16:50.288 "strip_size_kb": 64,
00:16:50.288 "state": "online",
00:16:50.288 "raid_level": "raid0",
00:16:50.288 "superblock": false,
00:16:50.288 "num_base_bdevs": 4,
00:16:50.288 "num_base_bdevs_discovered": 4,
00:16:50.288 "num_base_bdevs_operational": 4,
00:16:50.288 "base_bdevs_list": [
00:16:50.288 {
00:16:50.288 "name": "BaseBdev1",
00:16:50.288 "uuid": "5f666e0b-1450-480a-82df-322c6ebdb2e4",
00:16:50.288 "is_configured": true,
00:16:50.288 "data_offset": 0,
00:16:50.288 "data_size": 65536
00:16:50.288 },
00:16:50.288 {
00:16:50.288 "name": "BaseBdev2",
00:16:50.288 "uuid": "f7a42efa-c600-46f1-b504-5f6ee11671fc",
00:16:50.288 "is_configured": true,
00:16:50.288 "data_offset": 0,
00:16:50.288 "data_size": 65536
00:16:50.288 },
00:16:50.288 {
00:16:50.288 "name": "BaseBdev3",
00:16:50.288 "uuid": "8c937450-d3d1-40a7-a454-23a0fe5a7b3d",
00:16:50.288 "is_configured": true,
00:16:50.288 "data_offset": 0,
00:16:50.288 "data_size": 65536
00:16:50.288 },
00:16:50.288 {
00:16:50.288 "name": "BaseBdev4",
00:16:50.288 "uuid": "60e14efa-cee1-4739-9432-895582a0e6f5",
00:16:50.288 "is_configured": true,
00:16:50.288 "data_offset": 0,
00:16:50.288 "data_size": 65536
00:16:50.288 }
00:16:50.288 ]
00:16:50.288 }'
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:50.288 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' [2024-12-06 13:10:57.157287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.855 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:50.855 "name": "Existed_Raid",
00:16:50.855 "aliases": [
00:16:50.855 "74318600-d2af-493b-a219-ed98864128f4"
00:16:50.855 ],
00:16:50.855 "product_name": "Raid Volume",
00:16:50.855 "block_size": 512,
00:16:50.855 "num_blocks": 262144,
00:16:50.855 "uuid": "74318600-d2af-493b-a219-ed98864128f4",
00:16:50.855 "assigned_rate_limits": {
00:16:50.855 "rw_ios_per_sec": 0,
00:16:50.855 "rw_mbytes_per_sec": 0,
00:16:50.855 "r_mbytes_per_sec": 0,
00:16:50.855 "w_mbytes_per_sec": 0
00:16:50.855 },
00:16:50.855 "claimed": false,
00:16:50.855 "zoned": false,
00:16:50.855 "supported_io_types": {
00:16:50.855 "read": true,
00:16:50.855 "write": true,
00:16:50.855 "unmap": true,
00:16:50.855 "flush": true,
00:16:50.855 "reset": true,
00:16:50.855 "nvme_admin": false,
00:16:50.855 "nvme_io": false,
00:16:50.855 "nvme_io_md": false,
00:16:50.855 "write_zeroes": true,
00:16:50.855 "zcopy": false,
00:16:50.855 "get_zone_info": false,
00:16:50.855 "zone_management": false,
00:16:50.855 "zone_append": false,
00:16:50.855 "compare": false,
00:16:50.855 "compare_and_write": false,
00:16:50.855 "abort": false,
00:16:50.855 "seek_hole": false,
00:16:50.855 "seek_data": false,
00:16:50.855 "copy": false,
00:16:50.855 "nvme_iov_md": false
00:16:50.855 },
00:16:50.855 "memory_domains": [
00:16:50.855 {
00:16:50.855 "dma_device_id": "system",
00:16:50.855 "dma_device_type": 1
00:16:50.855 },
00:16:50.855 {
00:16:50.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:50.855 "dma_device_type": 2
00:16:50.855 },
00:16:50.855 {
00:16:50.855 "dma_device_id": "system",
00:16:50.855 "dma_device_type": 1
00:16:50.855 },
00:16:50.855 {
00:16:50.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:50.855 "dma_device_type": 2
00:16:50.855 },
00:16:50.855 {
00:16:50.855 "dma_device_id": "system",
00:16:50.855 "dma_device_type": 1
00:16:50.855 },
00:16:50.855 {
00:16:50.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:50.855 "dma_device_type": 2 00:16:50.855 }, 00:16:50.855 { 00:16:50.855 "dma_device_id": "system", 00:16:50.855 "dma_device_type": 1 00:16:50.855 }, 00:16:50.855 { 00:16:50.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.855 "dma_device_type": 2 00:16:50.855 } 00:16:50.855 ], 00:16:50.855 "driver_specific": { 00:16:50.855 "raid": { 00:16:50.855 "uuid": "74318600-d2af-493b-a219-ed98864128f4", 00:16:50.855 "strip_size_kb": 64, 00:16:50.855 "state": "online", 00:16:50.855 "raid_level": "raid0", 00:16:50.855 "superblock": false, 00:16:50.855 "num_base_bdevs": 4, 00:16:50.855 "num_base_bdevs_discovered": 4, 00:16:50.855 "num_base_bdevs_operational": 4, 00:16:50.855 "base_bdevs_list": [ 00:16:50.855 { 00:16:50.855 "name": "BaseBdev1", 00:16:50.855 "uuid": "5f666e0b-1450-480a-82df-322c6ebdb2e4", 00:16:50.855 "is_configured": true, 00:16:50.855 "data_offset": 0, 00:16:50.855 "data_size": 65536 00:16:50.855 }, 00:16:50.855 { 00:16:50.855 "name": "BaseBdev2", 00:16:50.855 "uuid": "f7a42efa-c600-46f1-b504-5f6ee11671fc", 00:16:50.855 "is_configured": true, 00:16:50.855 "data_offset": 0, 00:16:50.856 "data_size": 65536 00:16:50.856 }, 00:16:50.856 { 00:16:50.856 "name": "BaseBdev3", 00:16:50.856 "uuid": "8c937450-d3d1-40a7-a454-23a0fe5a7b3d", 00:16:50.856 "is_configured": true, 00:16:50.856 "data_offset": 0, 00:16:50.856 "data_size": 65536 00:16:50.856 }, 00:16:50.856 { 00:16:50.856 "name": "BaseBdev4", 00:16:50.856 "uuid": "60e14efa-cee1-4739-9432-895582a0e6f5", 00:16:50.856 "is_configured": true, 00:16:50.856 "data_offset": 0, 00:16:50.856 "data_size": 65536 00:16:50.856 } 00:16:50.856 ] 00:16:50.856 } 00:16:50.856 } 00:16:50.856 }' 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:50.856 BaseBdev2 00:16:50.856 BaseBdev3 
00:16:50.856 BaseBdev4' 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.856 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.114 13:10:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.114 13:10:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.114 [2024-12-06 13:10:57.521061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.114 [2024-12-06 13:10:57.521104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.114 [2024-12-06 13:10:57.521176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.114 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.373 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.373 "name": "Existed_Raid", 00:16:51.373 "uuid": "74318600-d2af-493b-a219-ed98864128f4", 00:16:51.373 "strip_size_kb": 64, 00:16:51.373 "state": "offline", 00:16:51.373 "raid_level": "raid0", 00:16:51.373 "superblock": false, 00:16:51.373 "num_base_bdevs": 4, 00:16:51.373 "num_base_bdevs_discovered": 3, 00:16:51.373 "num_base_bdevs_operational": 3, 00:16:51.373 "base_bdevs_list": [ 00:16:51.373 { 00:16:51.373 "name": null, 00:16:51.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.373 "is_configured": false, 00:16:51.373 "data_offset": 0, 00:16:51.373 "data_size": 65536 00:16:51.373 }, 00:16:51.373 { 00:16:51.373 "name": "BaseBdev2", 00:16:51.373 "uuid": "f7a42efa-c600-46f1-b504-5f6ee11671fc", 00:16:51.373 "is_configured": 
true, 00:16:51.373 "data_offset": 0, 00:16:51.373 "data_size": 65536 00:16:51.373 }, 00:16:51.373 { 00:16:51.373 "name": "BaseBdev3", 00:16:51.373 "uuid": "8c937450-d3d1-40a7-a454-23a0fe5a7b3d", 00:16:51.373 "is_configured": true, 00:16:51.373 "data_offset": 0, 00:16:51.373 "data_size": 65536 00:16:51.373 }, 00:16:51.373 { 00:16:51.373 "name": "BaseBdev4", 00:16:51.373 "uuid": "60e14efa-cee1-4739-9432-895582a0e6f5", 00:16:51.373 "is_configured": true, 00:16:51.373 "data_offset": 0, 00:16:51.373 "data_size": 65536 00:16:51.373 } 00:16:51.373 ] 00:16:51.373 }' 00:16:51.373 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.373 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.940 [2024-12-06 13:10:58.256156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.940 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.940 [2024-12-06 13:10:58.429293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:52.199 13:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.199 [2024-12-06 13:10:58.582809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:52.199 [2024-12-06 13:10:58.583046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:52.199 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.458 BaseBdev2 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.458 [ 00:16:52.458 { 00:16:52.458 "name": "BaseBdev2", 00:16:52.458 "aliases": [ 00:16:52.458 "1665f879-ee1a-4e32-a1f3-bed92bc0f615" 00:16:52.458 ], 00:16:52.458 "product_name": "Malloc disk", 00:16:52.458 "block_size": 512, 00:16:52.458 "num_blocks": 65536, 00:16:52.458 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:52.458 "assigned_rate_limits": { 00:16:52.458 "rw_ios_per_sec": 0, 00:16:52.458 "rw_mbytes_per_sec": 0, 00:16:52.458 "r_mbytes_per_sec": 0, 00:16:52.458 "w_mbytes_per_sec": 0 00:16:52.458 }, 00:16:52.458 "claimed": false, 00:16:52.458 "zoned": false, 00:16:52.458 "supported_io_types": { 00:16:52.458 "read": true, 00:16:52.458 "write": true, 00:16:52.458 "unmap": true, 00:16:52.458 "flush": true, 00:16:52.458 "reset": true, 00:16:52.458 "nvme_admin": false, 00:16:52.458 "nvme_io": false, 00:16:52.458 "nvme_io_md": false, 00:16:52.458 "write_zeroes": true, 00:16:52.458 "zcopy": true, 00:16:52.458 "get_zone_info": false, 00:16:52.458 "zone_management": false, 00:16:52.458 "zone_append": false, 00:16:52.458 "compare": false, 00:16:52.458 "compare_and_write": false, 00:16:52.458 "abort": true, 00:16:52.458 "seek_hole": false, 00:16:52.458 
"seek_data": false, 00:16:52.458 "copy": true, 00:16:52.458 "nvme_iov_md": false 00:16:52.458 }, 00:16:52.458 "memory_domains": [ 00:16:52.458 { 00:16:52.458 "dma_device_id": "system", 00:16:52.458 "dma_device_type": 1 00:16:52.458 }, 00:16:52.458 { 00:16:52.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.458 "dma_device_type": 2 00:16:52.458 } 00:16:52.458 ], 00:16:52.458 "driver_specific": {} 00:16:52.458 } 00:16:52.458 ] 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.458 BaseBdev3 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.458 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.459 [ 00:16:52.459 { 00:16:52.459 "name": "BaseBdev3", 00:16:52.459 "aliases": [ 00:16:52.459 "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6" 00:16:52.459 ], 00:16:52.459 "product_name": "Malloc disk", 00:16:52.459 "block_size": 512, 00:16:52.459 "num_blocks": 65536, 00:16:52.459 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:52.459 "assigned_rate_limits": { 00:16:52.459 "rw_ios_per_sec": 0, 00:16:52.459 "rw_mbytes_per_sec": 0, 00:16:52.459 "r_mbytes_per_sec": 0, 00:16:52.459 "w_mbytes_per_sec": 0 00:16:52.459 }, 00:16:52.459 "claimed": false, 00:16:52.459 "zoned": false, 00:16:52.459 "supported_io_types": { 00:16:52.459 "read": true, 00:16:52.459 "write": true, 00:16:52.459 "unmap": true, 00:16:52.459 "flush": true, 00:16:52.459 "reset": true, 00:16:52.459 "nvme_admin": false, 00:16:52.459 "nvme_io": false, 00:16:52.459 "nvme_io_md": false, 00:16:52.459 "write_zeroes": true, 00:16:52.459 "zcopy": true, 00:16:52.459 "get_zone_info": false, 00:16:52.459 "zone_management": false, 00:16:52.459 "zone_append": false, 00:16:52.459 "compare": false, 00:16:52.459 "compare_and_write": false, 00:16:52.459 "abort": true, 00:16:52.459 "seek_hole": false, 00:16:52.459 "seek_data": false, 
00:16:52.459 "copy": true, 00:16:52.459 "nvme_iov_md": false 00:16:52.459 }, 00:16:52.459 "memory_domains": [ 00:16:52.459 { 00:16:52.459 "dma_device_id": "system", 00:16:52.459 "dma_device_type": 1 00:16:52.459 }, 00:16:52.459 { 00:16:52.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.459 "dma_device_type": 2 00:16:52.459 } 00:16:52.459 ], 00:16:52.459 "driver_specific": {} 00:16:52.459 } 00:16:52.459 ] 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.459 BaseBdev4 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.459 
13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.459 [ 00:16:52.459 { 00:16:52.459 "name": "BaseBdev4", 00:16:52.459 "aliases": [ 00:16:52.459 "fbfb939b-4d09-4c57-a82d-f73cf5858fc0" 00:16:52.459 ], 00:16:52.459 "product_name": "Malloc disk", 00:16:52.459 "block_size": 512, 00:16:52.459 "num_blocks": 65536, 00:16:52.459 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:52.459 "assigned_rate_limits": { 00:16:52.459 "rw_ios_per_sec": 0, 00:16:52.459 "rw_mbytes_per_sec": 0, 00:16:52.459 "r_mbytes_per_sec": 0, 00:16:52.459 "w_mbytes_per_sec": 0 00:16:52.459 }, 00:16:52.459 "claimed": false, 00:16:52.459 "zoned": false, 00:16:52.459 "supported_io_types": { 00:16:52.459 "read": true, 00:16:52.459 "write": true, 00:16:52.459 "unmap": true, 00:16:52.459 "flush": true, 00:16:52.459 "reset": true, 00:16:52.459 "nvme_admin": false, 00:16:52.459 "nvme_io": false, 00:16:52.459 "nvme_io_md": false, 00:16:52.459 "write_zeroes": true, 00:16:52.459 "zcopy": true, 00:16:52.459 "get_zone_info": false, 00:16:52.459 "zone_management": false, 00:16:52.459 "zone_append": false, 00:16:52.459 "compare": false, 00:16:52.459 "compare_and_write": false, 00:16:52.459 "abort": true, 00:16:52.459 "seek_hole": false, 00:16:52.459 "seek_data": false, 00:16:52.459 
"copy": true, 00:16:52.459 "nvme_iov_md": false 00:16:52.459 }, 00:16:52.459 "memory_domains": [ 00:16:52.459 { 00:16:52.459 "dma_device_id": "system", 00:16:52.459 "dma_device_type": 1 00:16:52.459 }, 00:16:52.459 { 00:16:52.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.459 "dma_device_type": 2 00:16:52.459 } 00:16:52.459 ], 00:16:52.459 "driver_specific": {} 00:16:52.459 } 00:16:52.459 ] 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.459 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.459 [2024-12-06 13:10:58.979461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.459 [2024-12-06 13:10:58.979720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.459 [2024-12-06 13:10:58.979779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.459 [2024-12-06 13:10:58.982444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.459 [2024-12-06 13:10:58.982532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.717 13:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.717 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.717 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.718 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.718 "name": "Existed_Raid", 00:16:52.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.718 "strip_size_kb": 64, 00:16:52.718 "state": "configuring", 00:16:52.718 
"raid_level": "raid0", 00:16:52.718 "superblock": false, 00:16:52.718 "num_base_bdevs": 4, 00:16:52.718 "num_base_bdevs_discovered": 3, 00:16:52.718 "num_base_bdevs_operational": 4, 00:16:52.718 "base_bdevs_list": [ 00:16:52.718 { 00:16:52.718 "name": "BaseBdev1", 00:16:52.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.718 "is_configured": false, 00:16:52.718 "data_offset": 0, 00:16:52.718 "data_size": 0 00:16:52.718 }, 00:16:52.718 { 00:16:52.718 "name": "BaseBdev2", 00:16:52.718 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:52.718 "is_configured": true, 00:16:52.718 "data_offset": 0, 00:16:52.718 "data_size": 65536 00:16:52.718 }, 00:16:52.718 { 00:16:52.718 "name": "BaseBdev3", 00:16:52.718 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:52.718 "is_configured": true, 00:16:52.718 "data_offset": 0, 00:16:52.718 "data_size": 65536 00:16:52.718 }, 00:16:52.718 { 00:16:52.718 "name": "BaseBdev4", 00:16:52.718 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:52.718 "is_configured": true, 00:16:52.718 "data_offset": 0, 00:16:52.718 "data_size": 65536 00:16:52.718 } 00:16:52.718 ] 00:16:52.718 }' 00:16:52.718 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.718 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.286 [2024-12-06 13:10:59.555648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.286 "name": "Existed_Raid", 00:16:53.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.286 "strip_size_kb": 64, 00:16:53.286 "state": "configuring", 00:16:53.286 "raid_level": "raid0", 00:16:53.286 "superblock": false, 00:16:53.286 
"num_base_bdevs": 4, 00:16:53.286 "num_base_bdevs_discovered": 2, 00:16:53.286 "num_base_bdevs_operational": 4, 00:16:53.286 "base_bdevs_list": [ 00:16:53.286 { 00:16:53.286 "name": "BaseBdev1", 00:16:53.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.286 "is_configured": false, 00:16:53.286 "data_offset": 0, 00:16:53.286 "data_size": 0 00:16:53.286 }, 00:16:53.286 { 00:16:53.286 "name": null, 00:16:53.286 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:53.286 "is_configured": false, 00:16:53.286 "data_offset": 0, 00:16:53.286 "data_size": 65536 00:16:53.286 }, 00:16:53.286 { 00:16:53.286 "name": "BaseBdev3", 00:16:53.286 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:53.286 "is_configured": true, 00:16:53.286 "data_offset": 0, 00:16:53.286 "data_size": 65536 00:16:53.286 }, 00:16:53.286 { 00:16:53.286 "name": "BaseBdev4", 00:16:53.286 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:53.286 "is_configured": true, 00:16:53.286 "data_offset": 0, 00:16:53.286 "data_size": 65536 00:16:53.286 } 00:16:53.286 ] 00:16:53.286 }' 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.286 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:53.854 13:11:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.854 [2024-12-06 13:11:00.186689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.854 BaseBdev1 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.854 [ 00:16:53.854 { 00:16:53.854 "name": "BaseBdev1", 00:16:53.854 "aliases": [ 00:16:53.854 "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748" 00:16:53.854 ], 00:16:53.854 "product_name": "Malloc disk", 00:16:53.854 "block_size": 512, 00:16:53.854 "num_blocks": 65536, 00:16:53.854 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:53.854 "assigned_rate_limits": { 00:16:53.854 "rw_ios_per_sec": 0, 00:16:53.854 "rw_mbytes_per_sec": 0, 00:16:53.854 "r_mbytes_per_sec": 0, 00:16:53.854 "w_mbytes_per_sec": 0 00:16:53.854 }, 00:16:53.854 "claimed": true, 00:16:53.854 "claim_type": "exclusive_write", 00:16:53.854 "zoned": false, 00:16:53.854 "supported_io_types": { 00:16:53.854 "read": true, 00:16:53.854 "write": true, 00:16:53.854 "unmap": true, 00:16:53.854 "flush": true, 00:16:53.854 "reset": true, 00:16:53.854 "nvme_admin": false, 00:16:53.854 "nvme_io": false, 00:16:53.854 "nvme_io_md": false, 00:16:53.854 "write_zeroes": true, 00:16:53.854 "zcopy": true, 00:16:53.854 "get_zone_info": false, 00:16:53.854 "zone_management": false, 00:16:53.854 "zone_append": false, 00:16:53.854 "compare": false, 00:16:53.854 "compare_and_write": false, 00:16:53.854 "abort": true, 00:16:53.854 "seek_hole": false, 00:16:53.854 "seek_data": false, 00:16:53.854 "copy": true, 00:16:53.854 "nvme_iov_md": false 00:16:53.854 }, 00:16:53.854 "memory_domains": [ 00:16:53.854 { 00:16:53.854 "dma_device_id": "system", 00:16:53.854 "dma_device_type": 1 00:16:53.854 }, 00:16:53.854 { 00:16:53.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.854 "dma_device_type": 2 00:16:53.854 } 00:16:53.854 ], 00:16:53.854 "driver_specific": {} 00:16:53.854 } 00:16:53.854 ] 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.854 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.855 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.855 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.855 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.855 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.855 "name": "Existed_Raid", 00:16:53.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.855 "strip_size_kb": 64, 00:16:53.855 "state": "configuring", 00:16:53.855 "raid_level": "raid0", 00:16:53.855 "superblock": false, 
00:16:53.855 "num_base_bdevs": 4, 00:16:53.855 "num_base_bdevs_discovered": 3, 00:16:53.855 "num_base_bdevs_operational": 4, 00:16:53.855 "base_bdevs_list": [ 00:16:53.855 { 00:16:53.855 "name": "BaseBdev1", 00:16:53.855 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:53.855 "is_configured": true, 00:16:53.855 "data_offset": 0, 00:16:53.855 "data_size": 65536 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "name": null, 00:16:53.855 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:53.855 "is_configured": false, 00:16:53.855 "data_offset": 0, 00:16:53.855 "data_size": 65536 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "name": "BaseBdev3", 00:16:53.855 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:53.855 "is_configured": true, 00:16:53.855 "data_offset": 0, 00:16:53.855 "data_size": 65536 00:16:53.855 }, 00:16:53.855 { 00:16:53.855 "name": "BaseBdev4", 00:16:53.855 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:53.855 "is_configured": true, 00:16:53.855 "data_offset": 0, 00:16:53.855 "data_size": 65536 00:16:53.855 } 00:16:53.855 ] 00:16:53.855 }' 00:16:53.855 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.855 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:54.422 13:11:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.422 [2024-12-06 13:11:00.803030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.422 "name": "Existed_Raid", 00:16:54.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.422 "strip_size_kb": 64, 00:16:54.422 "state": "configuring", 00:16:54.422 "raid_level": "raid0", 00:16:54.422 "superblock": false, 00:16:54.422 "num_base_bdevs": 4, 00:16:54.422 "num_base_bdevs_discovered": 2, 00:16:54.422 "num_base_bdevs_operational": 4, 00:16:54.422 "base_bdevs_list": [ 00:16:54.422 { 00:16:54.422 "name": "BaseBdev1", 00:16:54.422 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:54.422 "is_configured": true, 00:16:54.422 "data_offset": 0, 00:16:54.422 "data_size": 65536 00:16:54.422 }, 00:16:54.422 { 00:16:54.422 "name": null, 00:16:54.422 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:54.422 "is_configured": false, 00:16:54.422 "data_offset": 0, 00:16:54.422 "data_size": 65536 00:16:54.422 }, 00:16:54.422 { 00:16:54.422 "name": null, 00:16:54.422 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:54.422 "is_configured": false, 00:16:54.422 "data_offset": 0, 00:16:54.422 "data_size": 65536 00:16:54.422 }, 00:16:54.422 { 00:16:54.422 "name": "BaseBdev4", 00:16:54.422 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:54.422 "is_configured": true, 00:16:54.422 "data_offset": 0, 00:16:54.422 "data_size": 65536 00:16:54.422 } 00:16:54.422 ] 00:16:54.422 }' 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.422 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.989 [2024-12-06 13:11:01.399168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.989 "name": "Existed_Raid", 00:16:54.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.989 "strip_size_kb": 64, 00:16:54.989 "state": "configuring", 00:16:54.989 "raid_level": "raid0", 00:16:54.989 "superblock": false, 00:16:54.989 "num_base_bdevs": 4, 00:16:54.989 "num_base_bdevs_discovered": 3, 00:16:54.989 "num_base_bdevs_operational": 4, 00:16:54.989 "base_bdevs_list": [ 00:16:54.989 { 00:16:54.989 "name": "BaseBdev1", 00:16:54.989 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:54.989 "is_configured": true, 00:16:54.989 "data_offset": 0, 00:16:54.989 "data_size": 65536 00:16:54.989 }, 00:16:54.989 { 00:16:54.989 "name": null, 00:16:54.989 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:54.989 "is_configured": false, 00:16:54.989 "data_offset": 0, 00:16:54.989 "data_size": 65536 00:16:54.989 }, 00:16:54.989 { 00:16:54.989 "name": "BaseBdev3", 00:16:54.989 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:54.989 "is_configured": 
true, 00:16:54.989 "data_offset": 0, 00:16:54.989 "data_size": 65536 00:16:54.989 }, 00:16:54.989 { 00:16:54.989 "name": "BaseBdev4", 00:16:54.989 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:54.989 "is_configured": true, 00:16:54.989 "data_offset": 0, 00:16:54.989 "data_size": 65536 00:16:54.989 } 00:16:54.989 ] 00:16:54.989 }' 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.989 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.558 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.558 [2024-12-06 13:11:01.967387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.558 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.818 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.818 "name": "Existed_Raid", 00:16:55.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.818 "strip_size_kb": 64, 00:16:55.818 "state": "configuring", 00:16:55.818 "raid_level": "raid0", 00:16:55.818 "superblock": false, 00:16:55.818 "num_base_bdevs": 4, 00:16:55.818 "num_base_bdevs_discovered": 2, 00:16:55.818 "num_base_bdevs_operational": 4, 00:16:55.818 
"base_bdevs_list": [ 00:16:55.818 { 00:16:55.818 "name": null, 00:16:55.818 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:55.818 "is_configured": false, 00:16:55.818 "data_offset": 0, 00:16:55.818 "data_size": 65536 00:16:55.818 }, 00:16:55.818 { 00:16:55.818 "name": null, 00:16:55.818 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:55.818 "is_configured": false, 00:16:55.818 "data_offset": 0, 00:16:55.818 "data_size": 65536 00:16:55.818 }, 00:16:55.818 { 00:16:55.818 "name": "BaseBdev3", 00:16:55.818 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:55.818 "is_configured": true, 00:16:55.818 "data_offset": 0, 00:16:55.818 "data_size": 65536 00:16:55.818 }, 00:16:55.818 { 00:16:55.818 "name": "BaseBdev4", 00:16:55.818 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:55.818 "is_configured": true, 00:16:55.818 "data_offset": 0, 00:16:55.818 "data_size": 65536 00:16:55.818 } 00:16:55.818 ] 00:16:55.818 }' 00:16:55.818 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.818 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.077 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.077 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:56.077 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.077 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:56.336 13:11:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.336 [2024-12-06 13:11:02.643866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.336 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.337 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.337 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.337 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.337 13:11:02 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:56.337 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.337 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.337 "name": "Existed_Raid", 00:16:56.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.337 "strip_size_kb": 64, 00:16:56.337 "state": "configuring", 00:16:56.337 "raid_level": "raid0", 00:16:56.337 "superblock": false, 00:16:56.337 "num_base_bdevs": 4, 00:16:56.337 "num_base_bdevs_discovered": 3, 00:16:56.337 "num_base_bdevs_operational": 4, 00:16:56.337 "base_bdevs_list": [ 00:16:56.337 { 00:16:56.337 "name": null, 00:16:56.337 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:56.337 "is_configured": false, 00:16:56.337 "data_offset": 0, 00:16:56.337 "data_size": 65536 00:16:56.337 }, 00:16:56.337 { 00:16:56.337 "name": "BaseBdev2", 00:16:56.337 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:56.337 "is_configured": true, 00:16:56.337 "data_offset": 0, 00:16:56.337 "data_size": 65536 00:16:56.337 }, 00:16:56.337 { 00:16:56.337 "name": "BaseBdev3", 00:16:56.337 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:56.337 "is_configured": true, 00:16:56.337 "data_offset": 0, 00:16:56.337 "data_size": 65536 00:16:56.337 }, 00:16:56.337 { 00:16:56.337 "name": "BaseBdev4", 00:16:56.337 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:56.337 "is_configured": true, 00:16:56.337 "data_offset": 0, 00:16:56.337 "data_size": 65536 00:16:56.337 } 00:16:56.337 ] 00:16:56.337 }' 00:16:56.337 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.337 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.906 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.907 [2024-12-06 13:11:03.309822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:56.907 [2024-12-06 13:11:03.309922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:56.907 [2024-12-06 13:11:03.309933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:56.907 [2024-12-06 13:11:03.310288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:56.907 [2024-12-06 13:11:03.310521] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:56.907 [2024-12-06 13:11:03.310543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:56.907 [2024-12-06 13:11:03.310946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.907 NewBaseBdev 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.907 [ 00:16:56.907 { 
00:16:56.907 "name": "NewBaseBdev", 00:16:56.907 "aliases": [ 00:16:56.907 "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748" 00:16:56.907 ], 00:16:56.907 "product_name": "Malloc disk", 00:16:56.907 "block_size": 512, 00:16:56.907 "num_blocks": 65536, 00:16:56.907 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:56.907 "assigned_rate_limits": { 00:16:56.907 "rw_ios_per_sec": 0, 00:16:56.907 "rw_mbytes_per_sec": 0, 00:16:56.907 "r_mbytes_per_sec": 0, 00:16:56.907 "w_mbytes_per_sec": 0 00:16:56.907 }, 00:16:56.907 "claimed": true, 00:16:56.907 "claim_type": "exclusive_write", 00:16:56.907 "zoned": false, 00:16:56.907 "supported_io_types": { 00:16:56.907 "read": true, 00:16:56.907 "write": true, 00:16:56.907 "unmap": true, 00:16:56.907 "flush": true, 00:16:56.907 "reset": true, 00:16:56.907 "nvme_admin": false, 00:16:56.907 "nvme_io": false, 00:16:56.907 "nvme_io_md": false, 00:16:56.907 "write_zeroes": true, 00:16:56.907 "zcopy": true, 00:16:56.907 "get_zone_info": false, 00:16:56.907 "zone_management": false, 00:16:56.907 "zone_append": false, 00:16:56.907 "compare": false, 00:16:56.907 "compare_and_write": false, 00:16:56.907 "abort": true, 00:16:56.907 "seek_hole": false, 00:16:56.907 "seek_data": false, 00:16:56.907 "copy": true, 00:16:56.907 "nvme_iov_md": false 00:16:56.907 }, 00:16:56.907 "memory_domains": [ 00:16:56.907 { 00:16:56.907 "dma_device_id": "system", 00:16:56.907 "dma_device_type": 1 00:16:56.907 }, 00:16:56.907 { 00:16:56.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.907 "dma_device_type": 2 00:16:56.907 } 00:16:56.907 ], 00:16:56.907 "driver_specific": {} 00:16:56.907 } 00:16:56.907 ] 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:56.907 
13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.907 "name": "Existed_Raid", 00:16:56.907 "uuid": "ae0aef6c-370e-4b92-8a4f-d527a4b20346", 00:16:56.907 "strip_size_kb": 64, 00:16:56.907 "state": "online", 00:16:56.907 "raid_level": "raid0", 00:16:56.907 "superblock": false, 00:16:56.907 "num_base_bdevs": 4, 00:16:56.907 "num_base_bdevs_discovered": 4, 00:16:56.907 
"num_base_bdevs_operational": 4, 00:16:56.907 "base_bdevs_list": [ 00:16:56.907 { 00:16:56.907 "name": "NewBaseBdev", 00:16:56.907 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:56.907 "is_configured": true, 00:16:56.907 "data_offset": 0, 00:16:56.907 "data_size": 65536 00:16:56.907 }, 00:16:56.907 { 00:16:56.907 "name": "BaseBdev2", 00:16:56.907 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:56.907 "is_configured": true, 00:16:56.907 "data_offset": 0, 00:16:56.907 "data_size": 65536 00:16:56.907 }, 00:16:56.907 { 00:16:56.907 "name": "BaseBdev3", 00:16:56.907 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:56.907 "is_configured": true, 00:16:56.907 "data_offset": 0, 00:16:56.907 "data_size": 65536 00:16:56.907 }, 00:16:56.907 { 00:16:56.907 "name": "BaseBdev4", 00:16:56.907 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:56.907 "is_configured": true, 00:16:56.907 "data_offset": 0, 00:16:56.907 "data_size": 65536 00:16:56.907 } 00:16:56.907 ] 00:16:56.907 }' 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.907 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.474 [2024-12-06 13:11:03.842607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:57.474 "name": "Existed_Raid", 00:16:57.474 "aliases": [ 00:16:57.474 "ae0aef6c-370e-4b92-8a4f-d527a4b20346" 00:16:57.474 ], 00:16:57.474 "product_name": "Raid Volume", 00:16:57.474 "block_size": 512, 00:16:57.474 "num_blocks": 262144, 00:16:57.474 "uuid": "ae0aef6c-370e-4b92-8a4f-d527a4b20346", 00:16:57.474 "assigned_rate_limits": { 00:16:57.474 "rw_ios_per_sec": 0, 00:16:57.474 "rw_mbytes_per_sec": 0, 00:16:57.474 "r_mbytes_per_sec": 0, 00:16:57.474 "w_mbytes_per_sec": 0 00:16:57.474 }, 00:16:57.474 "claimed": false, 00:16:57.474 "zoned": false, 00:16:57.474 "supported_io_types": { 00:16:57.474 "read": true, 00:16:57.474 "write": true, 00:16:57.474 "unmap": true, 00:16:57.474 "flush": true, 00:16:57.474 "reset": true, 00:16:57.474 "nvme_admin": false, 00:16:57.474 "nvme_io": false, 00:16:57.474 "nvme_io_md": false, 00:16:57.474 "write_zeroes": true, 00:16:57.474 "zcopy": false, 00:16:57.474 "get_zone_info": false, 00:16:57.474 "zone_management": false, 00:16:57.474 "zone_append": false, 00:16:57.474 "compare": false, 00:16:57.474 "compare_and_write": false, 00:16:57.474 "abort": false, 00:16:57.474 "seek_hole": false, 00:16:57.474 "seek_data": false, 00:16:57.474 "copy": false, 00:16:57.474 "nvme_iov_md": false 00:16:57.474 }, 00:16:57.474 "memory_domains": [ 00:16:57.474 { 00:16:57.474 "dma_device_id": "system", 
00:16:57.474 "dma_device_type": 1 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.474 "dma_device_type": 2 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "dma_device_id": "system", 00:16:57.474 "dma_device_type": 1 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.474 "dma_device_type": 2 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "dma_device_id": "system", 00:16:57.474 "dma_device_type": 1 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.474 "dma_device_type": 2 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "dma_device_id": "system", 00:16:57.474 "dma_device_type": 1 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.474 "dma_device_type": 2 00:16:57.474 } 00:16:57.474 ], 00:16:57.474 "driver_specific": { 00:16:57.474 "raid": { 00:16:57.474 "uuid": "ae0aef6c-370e-4b92-8a4f-d527a4b20346", 00:16:57.474 "strip_size_kb": 64, 00:16:57.474 "state": "online", 00:16:57.474 "raid_level": "raid0", 00:16:57.474 "superblock": false, 00:16:57.474 "num_base_bdevs": 4, 00:16:57.474 "num_base_bdevs_discovered": 4, 00:16:57.474 "num_base_bdevs_operational": 4, 00:16:57.474 "base_bdevs_list": [ 00:16:57.474 { 00:16:57.474 "name": "NewBaseBdev", 00:16:57.474 "uuid": "aa2712ef-d9bf-46f4-ba3b-ff0c1de8d748", 00:16:57.474 "is_configured": true, 00:16:57.474 "data_offset": 0, 00:16:57.474 "data_size": 65536 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "name": "BaseBdev2", 00:16:57.474 "uuid": "1665f879-ee1a-4e32-a1f3-bed92bc0f615", 00:16:57.474 "is_configured": true, 00:16:57.474 "data_offset": 0, 00:16:57.474 "data_size": 65536 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "name": "BaseBdev3", 00:16:57.474 "uuid": "e2dae974-0864-4e2f-b5a5-c50f8fcc6da6", 00:16:57.474 "is_configured": true, 00:16:57.474 "data_offset": 0, 00:16:57.474 "data_size": 65536 00:16:57.474 }, 00:16:57.474 { 00:16:57.474 "name": "BaseBdev4", 
00:16:57.474 "uuid": "fbfb939b-4d09-4c57-a82d-f73cf5858fc0", 00:16:57.474 "is_configured": true, 00:16:57.474 "data_offset": 0, 00:16:57.474 "data_size": 65536 00:16:57.474 } 00:16:57.474 ] 00:16:57.474 } 00:16:57.474 } 00:16:57.474 }' 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:57.474 BaseBdev2 00:16:57.474 BaseBdev3 00:16:57.474 BaseBdev4' 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.474 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:57.733 13:11:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.733 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.733 [2024-12-06 13:11:04.218177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:57.733 [2024-12-06 13:11:04.218220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.733 [2024-12-06 13:11:04.218362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.733 [2024-12-06 13:11:04.218477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.733 [2024-12-06 13:11:04.218496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:57.734 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.734 13:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69730 00:16:57.734 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69730 ']' 00:16:57.734 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69730 00:16:57.734 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:57.734 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.734 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69730 00:16:57.992 killing process with pid 69730 00:16:57.992 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.992 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.992 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69730' 00:16:57.992 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69730 00:16:57.992 [2024-12-06 13:11:04.260267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.992 13:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69730 00:16:58.251 [2024-12-06 13:11:04.609416] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:59.624 00:16:59.624 real 0m13.256s 00:16:59.624 user 0m21.867s 00:16:59.624 sys 0m1.936s 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.624 ************************************ 00:16:59.624 END TEST raid_state_function_test 00:16:59.624 ************************************ 00:16:59.624 13:11:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:16:59.624 13:11:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:59.624 13:11:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.624 13:11:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.624 ************************************ 00:16:59.624 START TEST raid_state_function_test_sb 00:16:59.624 ************************************ 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:59.624 13:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:59.624 Process raid pid: 70420 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70420 00:16:59.624 13:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70420' 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70420 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70420 ']' 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.624 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.624 [2024-12-06 13:11:05.873867] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:16:59.624 [2024-12-06 13:11:05.874272] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.624 [2024-12-06 13:11:06.054921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.883 [2024-12-06 13:11:06.230615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.142 [2024-12-06 13:11:06.463692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.142 [2024-12-06 13:11:06.463763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.401 [2024-12-06 13:11:06.853935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.401 [2024-12-06 13:11:06.854060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.401 [2024-12-06 13:11:06.854099] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.401 [2024-12-06 13:11:06.854130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.401 [2024-12-06 13:11:06.854140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:17:00.401 [2024-12-06 13:11:06.854169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.401 [2024-12-06 13:11:06.854179] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:00.401 [2024-12-06 13:11:06.854193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.401 13:11:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.401 "name": "Existed_Raid", 00:17:00.401 "uuid": "aba30bed-ee6d-4339-9ac6-73d61fa1b959", 00:17:00.401 "strip_size_kb": 64, 00:17:00.401 "state": "configuring", 00:17:00.401 "raid_level": "raid0", 00:17:00.401 "superblock": true, 00:17:00.401 "num_base_bdevs": 4, 00:17:00.401 "num_base_bdevs_discovered": 0, 00:17:00.401 "num_base_bdevs_operational": 4, 00:17:00.401 "base_bdevs_list": [ 00:17:00.401 { 00:17:00.401 "name": "BaseBdev1", 00:17:00.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.401 "is_configured": false, 00:17:00.401 "data_offset": 0, 00:17:00.401 "data_size": 0 00:17:00.401 }, 00:17:00.401 { 00:17:00.401 "name": "BaseBdev2", 00:17:00.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.401 "is_configured": false, 00:17:00.401 "data_offset": 0, 00:17:00.401 "data_size": 0 00:17:00.401 }, 00:17:00.401 { 00:17:00.401 "name": "BaseBdev3", 00:17:00.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.401 "is_configured": false, 00:17:00.401 "data_offset": 0, 00:17:00.401 "data_size": 0 00:17:00.401 }, 00:17:00.401 { 00:17:00.401 "name": "BaseBdev4", 00:17:00.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.401 "is_configured": false, 00:17:00.401 "data_offset": 0, 00:17:00.401 "data_size": 0 00:17:00.401 } 00:17:00.401 ] 00:17:00.401 }' 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.401 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.968 [2024-12-06 13:11:07.402008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.968 [2024-12-06 13:11:07.402096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.968 [2024-12-06 13:11:07.409997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.968 [2024-12-06 13:11:07.410070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.968 [2024-12-06 13:11:07.410086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.968 [2024-12-06 13:11:07.410102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.968 [2024-12-06 13:11:07.410112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:00.968 [2024-12-06 13:11:07.410126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.968 [2024-12-06 13:11:07.410136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:17:00.968 [2024-12-06 13:11:07.410151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.968 [2024-12-06 13:11:07.459459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.968 BaseBdev1 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.968 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.968 [ 00:17:00.968 { 00:17:00.968 "name": "BaseBdev1", 00:17:00.968 "aliases": [ 00:17:00.968 "d38bc5b1-6742-4608-a416-9677f8438ff8" 00:17:00.968 ], 00:17:00.968 "product_name": "Malloc disk", 00:17:00.968 "block_size": 512, 00:17:00.968 "num_blocks": 65536, 00:17:00.968 "uuid": "d38bc5b1-6742-4608-a416-9677f8438ff8", 00:17:00.968 "assigned_rate_limits": { 00:17:00.968 "rw_ios_per_sec": 0, 00:17:00.968 "rw_mbytes_per_sec": 0, 00:17:00.968 "r_mbytes_per_sec": 0, 00:17:00.968 "w_mbytes_per_sec": 0 00:17:00.968 }, 00:17:00.968 "claimed": true, 00:17:00.968 "claim_type": "exclusive_write", 00:17:00.968 "zoned": false, 00:17:00.968 "supported_io_types": { 00:17:00.968 "read": true, 00:17:00.968 "write": true, 00:17:00.968 "unmap": true, 00:17:00.968 "flush": true, 00:17:00.968 "reset": true, 00:17:00.968 "nvme_admin": false, 00:17:00.968 "nvme_io": false, 00:17:00.968 "nvme_io_md": false, 00:17:00.968 "write_zeroes": true, 00:17:00.968 "zcopy": true, 00:17:00.968 "get_zone_info": false, 00:17:00.968 "zone_management": false, 00:17:01.227 "zone_append": false, 00:17:01.227 "compare": false, 00:17:01.227 "compare_and_write": false, 00:17:01.227 "abort": true, 00:17:01.227 "seek_hole": false, 00:17:01.227 "seek_data": false, 00:17:01.227 "copy": true, 00:17:01.227 "nvme_iov_md": false 00:17:01.227 }, 00:17:01.227 "memory_domains": [ 00:17:01.227 { 00:17:01.227 "dma_device_id": "system", 00:17:01.227 "dma_device_type": 1 00:17:01.227 }, 00:17:01.227 { 00:17:01.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.227 "dma_device_type": 2 00:17:01.227 } 00:17:01.227 ], 00:17:01.227 "driver_specific": {} 
00:17:01.227 } 00:17:01.227 ] 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.227 "name": "Existed_Raid", 00:17:01.227 "uuid": "38ac6191-006a-41a2-a3c6-4529f1ea5c4d", 00:17:01.227 "strip_size_kb": 64, 00:17:01.227 "state": "configuring", 00:17:01.227 "raid_level": "raid0", 00:17:01.227 "superblock": true, 00:17:01.227 "num_base_bdevs": 4, 00:17:01.227 "num_base_bdevs_discovered": 1, 00:17:01.227 "num_base_bdevs_operational": 4, 00:17:01.227 "base_bdevs_list": [ 00:17:01.227 { 00:17:01.227 "name": "BaseBdev1", 00:17:01.227 "uuid": "d38bc5b1-6742-4608-a416-9677f8438ff8", 00:17:01.227 "is_configured": true, 00:17:01.227 "data_offset": 2048, 00:17:01.227 "data_size": 63488 00:17:01.227 }, 00:17:01.227 { 00:17:01.227 "name": "BaseBdev2", 00:17:01.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.227 "is_configured": false, 00:17:01.227 "data_offset": 0, 00:17:01.227 "data_size": 0 00:17:01.227 }, 00:17:01.227 { 00:17:01.227 "name": "BaseBdev3", 00:17:01.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.227 "is_configured": false, 00:17:01.227 "data_offset": 0, 00:17:01.227 "data_size": 0 00:17:01.227 }, 00:17:01.227 { 00:17:01.227 "name": "BaseBdev4", 00:17:01.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.227 "is_configured": false, 00:17:01.227 "data_offset": 0, 00:17:01.227 "data_size": 0 00:17:01.227 } 00:17:01.227 ] 00:17:01.227 }' 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.227 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.485 [2024-12-06 13:11:07.979725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.485 [2024-12-06 13:11:07.980010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.485 [2024-12-06 13:11:07.987798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.485 [2024-12-06 13:11:07.990610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.485 [2024-12-06 13:11:07.990831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.485 [2024-12-06 13:11:07.990859] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.485 [2024-12-06 13:11:07.990885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.485 [2024-12-06 13:11:07.990895] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:01.485 [2024-12-06 13:11:07.990909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:01.485 13:11:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.485 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.485 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.744 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.744 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.744 "name": 
"Existed_Raid", 00:17:01.744 "uuid": "f8f6fe7a-84b3-4c15-b703-6ad5c1b811a8", 00:17:01.744 "strip_size_kb": 64, 00:17:01.744 "state": "configuring", 00:17:01.744 "raid_level": "raid0", 00:17:01.744 "superblock": true, 00:17:01.744 "num_base_bdevs": 4, 00:17:01.744 "num_base_bdevs_discovered": 1, 00:17:01.744 "num_base_bdevs_operational": 4, 00:17:01.744 "base_bdevs_list": [ 00:17:01.744 { 00:17:01.744 "name": "BaseBdev1", 00:17:01.744 "uuid": "d38bc5b1-6742-4608-a416-9677f8438ff8", 00:17:01.744 "is_configured": true, 00:17:01.744 "data_offset": 2048, 00:17:01.744 "data_size": 63488 00:17:01.744 }, 00:17:01.744 { 00:17:01.744 "name": "BaseBdev2", 00:17:01.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.744 "is_configured": false, 00:17:01.744 "data_offset": 0, 00:17:01.744 "data_size": 0 00:17:01.744 }, 00:17:01.744 { 00:17:01.744 "name": "BaseBdev3", 00:17:01.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.744 "is_configured": false, 00:17:01.744 "data_offset": 0, 00:17:01.744 "data_size": 0 00:17:01.744 }, 00:17:01.744 { 00:17:01.744 "name": "BaseBdev4", 00:17:01.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.744 "is_configured": false, 00:17:01.744 "data_offset": 0, 00:17:01.744 "data_size": 0 00:17:01.744 } 00:17:01.744 ] 00:17:01.744 }' 00:17:01.744 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.744 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.002 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:02.002 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.002 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.261 [2024-12-06 13:11:08.532206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:17:02.261 BaseBdev2 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.261 [ 00:17:02.261 { 00:17:02.261 "name": "BaseBdev2", 00:17:02.261 "aliases": [ 00:17:02.261 "1182761e-d253-4810-abc8-7cbe3a7566f0" 00:17:02.261 ], 00:17:02.261 "product_name": "Malloc disk", 00:17:02.261 "block_size": 512, 00:17:02.261 "num_blocks": 65536, 00:17:02.261 "uuid": "1182761e-d253-4810-abc8-7cbe3a7566f0", 00:17:02.261 
"assigned_rate_limits": { 00:17:02.261 "rw_ios_per_sec": 0, 00:17:02.261 "rw_mbytes_per_sec": 0, 00:17:02.261 "r_mbytes_per_sec": 0, 00:17:02.261 "w_mbytes_per_sec": 0 00:17:02.261 }, 00:17:02.261 "claimed": true, 00:17:02.261 "claim_type": "exclusive_write", 00:17:02.261 "zoned": false, 00:17:02.261 "supported_io_types": { 00:17:02.261 "read": true, 00:17:02.261 "write": true, 00:17:02.261 "unmap": true, 00:17:02.261 "flush": true, 00:17:02.261 "reset": true, 00:17:02.261 "nvme_admin": false, 00:17:02.261 "nvme_io": false, 00:17:02.261 "nvme_io_md": false, 00:17:02.261 "write_zeroes": true, 00:17:02.261 "zcopy": true, 00:17:02.261 "get_zone_info": false, 00:17:02.261 "zone_management": false, 00:17:02.261 "zone_append": false, 00:17:02.261 "compare": false, 00:17:02.261 "compare_and_write": false, 00:17:02.261 "abort": true, 00:17:02.261 "seek_hole": false, 00:17:02.261 "seek_data": false, 00:17:02.261 "copy": true, 00:17:02.261 "nvme_iov_md": false 00:17:02.261 }, 00:17:02.261 "memory_domains": [ 00:17:02.261 { 00:17:02.261 "dma_device_id": "system", 00:17:02.261 "dma_device_type": 1 00:17:02.261 }, 00:17:02.261 { 00:17:02.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.261 "dma_device_type": 2 00:17:02.261 } 00:17:02.261 ], 00:17:02.261 "driver_specific": {} 00:17:02.261 } 00:17:02.261 ] 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.261 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.261 "name": "Existed_Raid", 00:17:02.261 "uuid": "f8f6fe7a-84b3-4c15-b703-6ad5c1b811a8", 00:17:02.261 "strip_size_kb": 64, 00:17:02.261 "state": "configuring", 00:17:02.261 "raid_level": "raid0", 00:17:02.261 "superblock": true, 00:17:02.261 "num_base_bdevs": 4, 00:17:02.261 "num_base_bdevs_discovered": 2, 00:17:02.261 "num_base_bdevs_operational": 4, 
00:17:02.261 "base_bdevs_list": [ 00:17:02.261 { 00:17:02.261 "name": "BaseBdev1", 00:17:02.261 "uuid": "d38bc5b1-6742-4608-a416-9677f8438ff8", 00:17:02.261 "is_configured": true, 00:17:02.261 "data_offset": 2048, 00:17:02.261 "data_size": 63488 00:17:02.261 }, 00:17:02.261 { 00:17:02.261 "name": "BaseBdev2", 00:17:02.261 "uuid": "1182761e-d253-4810-abc8-7cbe3a7566f0", 00:17:02.261 "is_configured": true, 00:17:02.261 "data_offset": 2048, 00:17:02.261 "data_size": 63488 00:17:02.261 }, 00:17:02.261 { 00:17:02.261 "name": "BaseBdev3", 00:17:02.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.261 "is_configured": false, 00:17:02.261 "data_offset": 0, 00:17:02.261 "data_size": 0 00:17:02.261 }, 00:17:02.261 { 00:17:02.261 "name": "BaseBdev4", 00:17:02.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.261 "is_configured": false, 00:17:02.262 "data_offset": 0, 00:17:02.262 "data_size": 0 00:17:02.262 } 00:17:02.262 ] 00:17:02.262 }' 00:17:02.262 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.262 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.519 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:02.519 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.519 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.777 [2024-12-06 13:11:09.097079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.778 BaseBdev3 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.778 [ 00:17:02.778 { 00:17:02.778 "name": "BaseBdev3", 00:17:02.778 "aliases": [ 00:17:02.778 "eef73b52-ef3c-4df3-9073-b93033736599" 00:17:02.778 ], 00:17:02.778 "product_name": "Malloc disk", 00:17:02.778 "block_size": 512, 00:17:02.778 "num_blocks": 65536, 00:17:02.778 "uuid": "eef73b52-ef3c-4df3-9073-b93033736599", 00:17:02.778 "assigned_rate_limits": { 00:17:02.778 "rw_ios_per_sec": 0, 00:17:02.778 "rw_mbytes_per_sec": 0, 00:17:02.778 "r_mbytes_per_sec": 0, 00:17:02.778 "w_mbytes_per_sec": 0 00:17:02.778 }, 00:17:02.778 "claimed": true, 00:17:02.778 "claim_type": "exclusive_write", 00:17:02.778 "zoned": false, 00:17:02.778 "supported_io_types": { 00:17:02.778 "read": true, 00:17:02.778 
"write": true, 00:17:02.778 "unmap": true, 00:17:02.778 "flush": true, 00:17:02.778 "reset": true, 00:17:02.778 "nvme_admin": false, 00:17:02.778 "nvme_io": false, 00:17:02.778 "nvme_io_md": false, 00:17:02.778 "write_zeroes": true, 00:17:02.778 "zcopy": true, 00:17:02.778 "get_zone_info": false, 00:17:02.778 "zone_management": false, 00:17:02.778 "zone_append": false, 00:17:02.778 "compare": false, 00:17:02.778 "compare_and_write": false, 00:17:02.778 "abort": true, 00:17:02.778 "seek_hole": false, 00:17:02.778 "seek_data": false, 00:17:02.778 "copy": true, 00:17:02.778 "nvme_iov_md": false 00:17:02.778 }, 00:17:02.778 "memory_domains": [ 00:17:02.778 { 00:17:02.778 "dma_device_id": "system", 00:17:02.778 "dma_device_type": 1 00:17:02.778 }, 00:17:02.778 { 00:17:02.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.778 "dma_device_type": 2 00:17:02.778 } 00:17:02.778 ], 00:17:02.778 "driver_specific": {} 00:17:02.778 } 00:17:02.778 ] 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.778 "name": "Existed_Raid", 00:17:02.778 "uuid": "f8f6fe7a-84b3-4c15-b703-6ad5c1b811a8", 00:17:02.778 "strip_size_kb": 64, 00:17:02.778 "state": "configuring", 00:17:02.778 "raid_level": "raid0", 00:17:02.778 "superblock": true, 00:17:02.778 "num_base_bdevs": 4, 00:17:02.778 "num_base_bdevs_discovered": 3, 00:17:02.778 "num_base_bdevs_operational": 4, 00:17:02.778 "base_bdevs_list": [ 00:17:02.778 { 00:17:02.778 "name": "BaseBdev1", 00:17:02.778 "uuid": "d38bc5b1-6742-4608-a416-9677f8438ff8", 00:17:02.778 "is_configured": true, 00:17:02.778 "data_offset": 2048, 00:17:02.778 "data_size": 63488 00:17:02.778 }, 00:17:02.778 { 00:17:02.778 "name": "BaseBdev2", 00:17:02.778 "uuid": 
"1182761e-d253-4810-abc8-7cbe3a7566f0", 00:17:02.778 "is_configured": true, 00:17:02.778 "data_offset": 2048, 00:17:02.778 "data_size": 63488 00:17:02.778 }, 00:17:02.778 { 00:17:02.778 "name": "BaseBdev3", 00:17:02.778 "uuid": "eef73b52-ef3c-4df3-9073-b93033736599", 00:17:02.778 "is_configured": true, 00:17:02.778 "data_offset": 2048, 00:17:02.778 "data_size": 63488 00:17:02.778 }, 00:17:02.778 { 00:17:02.778 "name": "BaseBdev4", 00:17:02.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.778 "is_configured": false, 00:17:02.778 "data_offset": 0, 00:17:02.778 "data_size": 0 00:17:02.778 } 00:17:02.778 ] 00:17:02.778 }' 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.778 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.345 [2024-12-06 13:11:09.741920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.345 [2024-12-06 13:11:09.742372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:03.345 [2024-12-06 13:11:09.742393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:03.345 [2024-12-06 13:11:09.742779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:03.345 BaseBdev4 00:17:03.345 [2024-12-06 13:11:09.742983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.345 [2024-12-06 13:11:09.743006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:17:03.345 [2024-12-06 13:11:09.743199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.345 [ 00:17:03.345 { 00:17:03.345 "name": "BaseBdev4", 00:17:03.345 "aliases": [ 00:17:03.345 "b4cbbce0-d55d-42f0-af5d-447576a040bb" 00:17:03.345 ], 00:17:03.345 "product_name": "Malloc disk", 00:17:03.345 "block_size": 512, 00:17:03.345 
"num_blocks": 65536, 00:17:03.345 "uuid": "b4cbbce0-d55d-42f0-af5d-447576a040bb", 00:17:03.345 "assigned_rate_limits": { 00:17:03.345 "rw_ios_per_sec": 0, 00:17:03.345 "rw_mbytes_per_sec": 0, 00:17:03.345 "r_mbytes_per_sec": 0, 00:17:03.345 "w_mbytes_per_sec": 0 00:17:03.345 }, 00:17:03.345 "claimed": true, 00:17:03.345 "claim_type": "exclusive_write", 00:17:03.345 "zoned": false, 00:17:03.345 "supported_io_types": { 00:17:03.345 "read": true, 00:17:03.345 "write": true, 00:17:03.345 "unmap": true, 00:17:03.345 "flush": true, 00:17:03.345 "reset": true, 00:17:03.345 "nvme_admin": false, 00:17:03.345 "nvme_io": false, 00:17:03.345 "nvme_io_md": false, 00:17:03.345 "write_zeroes": true, 00:17:03.345 "zcopy": true, 00:17:03.345 "get_zone_info": false, 00:17:03.345 "zone_management": false, 00:17:03.345 "zone_append": false, 00:17:03.345 "compare": false, 00:17:03.345 "compare_and_write": false, 00:17:03.345 "abort": true, 00:17:03.345 "seek_hole": false, 00:17:03.345 "seek_data": false, 00:17:03.345 "copy": true, 00:17:03.345 "nvme_iov_md": false 00:17:03.345 }, 00:17:03.345 "memory_domains": [ 00:17:03.345 { 00:17:03.345 "dma_device_id": "system", 00:17:03.345 "dma_device_type": 1 00:17:03.345 }, 00:17:03.345 { 00:17:03.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.345 "dma_device_type": 2 00:17:03.345 } 00:17:03.345 ], 00:17:03.345 "driver_specific": {} 00:17:03.345 } 00:17:03.345 ] 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.345 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.345 "name": "Existed_Raid", 00:17:03.345 "uuid": "f8f6fe7a-84b3-4c15-b703-6ad5c1b811a8", 00:17:03.345 "strip_size_kb": 64, 00:17:03.345 "state": "online", 00:17:03.345 "raid_level": "raid0", 00:17:03.345 "superblock": true, 00:17:03.345 "num_base_bdevs": 4, 
00:17:03.345 "num_base_bdevs_discovered": 4, 00:17:03.345 "num_base_bdevs_operational": 4, 00:17:03.345 "base_bdevs_list": [ 00:17:03.345 { 00:17:03.346 "name": "BaseBdev1", 00:17:03.346 "uuid": "d38bc5b1-6742-4608-a416-9677f8438ff8", 00:17:03.346 "is_configured": true, 00:17:03.346 "data_offset": 2048, 00:17:03.346 "data_size": 63488 00:17:03.346 }, 00:17:03.346 { 00:17:03.346 "name": "BaseBdev2", 00:17:03.346 "uuid": "1182761e-d253-4810-abc8-7cbe3a7566f0", 00:17:03.346 "is_configured": true, 00:17:03.346 "data_offset": 2048, 00:17:03.346 "data_size": 63488 00:17:03.346 }, 00:17:03.346 { 00:17:03.346 "name": "BaseBdev3", 00:17:03.346 "uuid": "eef73b52-ef3c-4df3-9073-b93033736599", 00:17:03.346 "is_configured": true, 00:17:03.346 "data_offset": 2048, 00:17:03.346 "data_size": 63488 00:17:03.346 }, 00:17:03.346 { 00:17:03.346 "name": "BaseBdev4", 00:17:03.346 "uuid": "b4cbbce0-d55d-42f0-af5d-447576a040bb", 00:17:03.346 "is_configured": true, 00:17:03.346 "data_offset": 2048, 00:17:03.346 "data_size": 63488 00:17:03.346 } 00:17:03.346 ] 00:17:03.346 }' 00:17:03.346 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.346 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.952 
13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.952 [2024-12-06 13:11:10.306700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.952 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.952 "name": "Existed_Raid", 00:17:03.952 "aliases": [ 00:17:03.952 "f8f6fe7a-84b3-4c15-b703-6ad5c1b811a8" 00:17:03.952 ], 00:17:03.952 "product_name": "Raid Volume", 00:17:03.952 "block_size": 512, 00:17:03.952 "num_blocks": 253952, 00:17:03.952 "uuid": "f8f6fe7a-84b3-4c15-b703-6ad5c1b811a8", 00:17:03.952 "assigned_rate_limits": { 00:17:03.952 "rw_ios_per_sec": 0, 00:17:03.952 "rw_mbytes_per_sec": 0, 00:17:03.952 "r_mbytes_per_sec": 0, 00:17:03.952 "w_mbytes_per_sec": 0 00:17:03.952 }, 00:17:03.952 "claimed": false, 00:17:03.952 "zoned": false, 00:17:03.952 "supported_io_types": { 00:17:03.952 "read": true, 00:17:03.952 "write": true, 00:17:03.952 "unmap": true, 00:17:03.952 "flush": true, 00:17:03.952 "reset": true, 00:17:03.952 "nvme_admin": false, 00:17:03.952 "nvme_io": false, 00:17:03.952 "nvme_io_md": false, 00:17:03.952 "write_zeroes": true, 00:17:03.952 "zcopy": false, 00:17:03.952 "get_zone_info": false, 00:17:03.952 "zone_management": false, 00:17:03.952 "zone_append": false, 00:17:03.952 "compare": false, 00:17:03.952 "compare_and_write": false, 00:17:03.952 "abort": false, 00:17:03.952 "seek_hole": false, 00:17:03.952 "seek_data": false, 00:17:03.952 "copy": false, 00:17:03.952 
"nvme_iov_md": false 00:17:03.952 }, 00:17:03.952 "memory_domains": [ 00:17:03.952 { 00:17:03.952 "dma_device_id": "system", 00:17:03.952 "dma_device_type": 1 00:17:03.952 }, 00:17:03.952 { 00:17:03.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.952 "dma_device_type": 2 00:17:03.952 }, 00:17:03.952 { 00:17:03.952 "dma_device_id": "system", 00:17:03.952 "dma_device_type": 1 00:17:03.952 }, 00:17:03.952 { 00:17:03.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.952 "dma_device_type": 2 00:17:03.952 }, 00:17:03.952 { 00:17:03.952 "dma_device_id": "system", 00:17:03.952 "dma_device_type": 1 00:17:03.952 }, 00:17:03.952 { 00:17:03.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.952 "dma_device_type": 2 00:17:03.952 }, 00:17:03.952 { 00:17:03.952 "dma_device_id": "system", 00:17:03.952 "dma_device_type": 1 00:17:03.952 }, 00:17:03.952 { 00:17:03.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.952 "dma_device_type": 2 00:17:03.952 } 00:17:03.952 ], 00:17:03.952 "driver_specific": { 00:17:03.952 "raid": { 00:17:03.952 "uuid": "f8f6fe7a-84b3-4c15-b703-6ad5c1b811a8", 00:17:03.952 "strip_size_kb": 64, 00:17:03.952 "state": "online", 00:17:03.952 "raid_level": "raid0", 00:17:03.952 "superblock": true, 00:17:03.952 "num_base_bdevs": 4, 00:17:03.952 "num_base_bdevs_discovered": 4, 00:17:03.952 "num_base_bdevs_operational": 4, 00:17:03.952 "base_bdevs_list": [ 00:17:03.952 { 00:17:03.952 "name": "BaseBdev1", 00:17:03.952 "uuid": "d38bc5b1-6742-4608-a416-9677f8438ff8", 00:17:03.952 "is_configured": true, 00:17:03.952 "data_offset": 2048, 00:17:03.952 "data_size": 63488 00:17:03.952 }, 00:17:03.952 { 00:17:03.952 "name": "BaseBdev2", 00:17:03.952 "uuid": "1182761e-d253-4810-abc8-7cbe3a7566f0", 00:17:03.953 "is_configured": true, 00:17:03.953 "data_offset": 2048, 00:17:03.953 "data_size": 63488 00:17:03.953 }, 00:17:03.953 { 00:17:03.953 "name": "BaseBdev3", 00:17:03.953 "uuid": "eef73b52-ef3c-4df3-9073-b93033736599", 00:17:03.953 "is_configured": true, 
00:17:03.953 "data_offset": 2048, 00:17:03.953 "data_size": 63488 00:17:03.953 }, 00:17:03.953 { 00:17:03.953 "name": "BaseBdev4", 00:17:03.953 "uuid": "b4cbbce0-d55d-42f0-af5d-447576a040bb", 00:17:03.953 "is_configured": true, 00:17:03.953 "data_offset": 2048, 00:17:03.953 "data_size": 63488 00:17:03.953 } 00:17:03.953 ] 00:17:03.953 } 00:17:03.953 } 00:17:03.953 }' 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:03.953 BaseBdev2 00:17:03.953 BaseBdev3 00:17:03.953 BaseBdev4' 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.953 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.211 13:11:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.211 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.211 [2024-12-06 13:11:10.674426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.211 [2024-12-06 13:11:10.674485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.211 [2024-12-06 13:11:10.674575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.469 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.470 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.470 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:04.470 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.470 "name": "Existed_Raid", 00:17:04.470 "uuid": "f8f6fe7a-84b3-4c15-b703-6ad5c1b811a8", 00:17:04.470 "strip_size_kb": 64, 00:17:04.470 "state": "offline", 00:17:04.470 "raid_level": "raid0", 00:17:04.470 "superblock": true, 00:17:04.470 "num_base_bdevs": 4, 00:17:04.470 "num_base_bdevs_discovered": 3, 00:17:04.470 "num_base_bdevs_operational": 3, 00:17:04.470 "base_bdevs_list": [ 00:17:04.470 { 00:17:04.470 "name": null, 00:17:04.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.470 "is_configured": false, 00:17:04.470 "data_offset": 0, 00:17:04.470 "data_size": 63488 00:17:04.470 }, 00:17:04.470 { 00:17:04.470 "name": "BaseBdev2", 00:17:04.470 "uuid": "1182761e-d253-4810-abc8-7cbe3a7566f0", 00:17:04.470 "is_configured": true, 00:17:04.470 "data_offset": 2048, 00:17:04.470 "data_size": 63488 00:17:04.470 }, 00:17:04.470 { 00:17:04.470 "name": "BaseBdev3", 00:17:04.470 "uuid": "eef73b52-ef3c-4df3-9073-b93033736599", 00:17:04.470 "is_configured": true, 00:17:04.470 "data_offset": 2048, 00:17:04.470 "data_size": 63488 00:17:04.470 }, 00:17:04.470 { 00:17:04.470 "name": "BaseBdev4", 00:17:04.470 "uuid": "b4cbbce0-d55d-42f0-af5d-447576a040bb", 00:17:04.470 "is_configured": true, 00:17:04.470 "data_offset": 2048, 00:17:04.470 "data_size": 63488 00:17:04.470 } 00:17:04.470 ] 00:17:04.470 }' 00:17:04.470 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.470 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.036 
13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.036 [2024-12-06 13:11:11.389005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.036 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.036 [2024-12-06 13:11:11.549101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:05.295 13:11:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 [2024-12-06 13:11:11.704807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:05.295 [2024-12-06 13:11:11.704881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.554 BaseBdev2 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.554 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.554 [ 00:17:05.554 { 00:17:05.554 "name": "BaseBdev2", 00:17:05.554 "aliases": [ 00:17:05.554 
"8238f56f-bda8-454e-a9d6-209eb0ad41d5" 00:17:05.554 ], 00:17:05.554 "product_name": "Malloc disk", 00:17:05.554 "block_size": 512, 00:17:05.554 "num_blocks": 65536, 00:17:05.554 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:05.554 "assigned_rate_limits": { 00:17:05.554 "rw_ios_per_sec": 0, 00:17:05.554 "rw_mbytes_per_sec": 0, 00:17:05.554 "r_mbytes_per_sec": 0, 00:17:05.554 "w_mbytes_per_sec": 0 00:17:05.554 }, 00:17:05.554 "claimed": false, 00:17:05.554 "zoned": false, 00:17:05.554 "supported_io_types": { 00:17:05.554 "read": true, 00:17:05.554 "write": true, 00:17:05.554 "unmap": true, 00:17:05.554 "flush": true, 00:17:05.554 "reset": true, 00:17:05.554 "nvme_admin": false, 00:17:05.554 "nvme_io": false, 00:17:05.554 "nvme_io_md": false, 00:17:05.554 "write_zeroes": true, 00:17:05.554 "zcopy": true, 00:17:05.554 "get_zone_info": false, 00:17:05.554 "zone_management": false, 00:17:05.554 "zone_append": false, 00:17:05.554 "compare": false, 00:17:05.554 "compare_and_write": false, 00:17:05.554 "abort": true, 00:17:05.554 "seek_hole": false, 00:17:05.554 "seek_data": false, 00:17:05.554 "copy": true, 00:17:05.554 "nvme_iov_md": false 00:17:05.554 }, 00:17:05.554 "memory_domains": [ 00:17:05.554 { 00:17:05.554 "dma_device_id": "system", 00:17:05.554 "dma_device_type": 1 00:17:05.554 }, 00:17:05.554 { 00:17:05.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.554 "dma_device_type": 2 00:17:05.554 } 00:17:05.555 ], 00:17:05.555 "driver_specific": {} 00:17:05.555 } 00:17:05.555 ] 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.555 13:11:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.555 BaseBdev3 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.555 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.555 [ 00:17:05.555 { 
00:17:05.555 "name": "BaseBdev3", 00:17:05.555 "aliases": [ 00:17:05.555 "049c6264-24e7-43ae-ab69-a35d43883e40" 00:17:05.555 ], 00:17:05.555 "product_name": "Malloc disk", 00:17:05.555 "block_size": 512, 00:17:05.555 "num_blocks": 65536, 00:17:05.555 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:05.555 "assigned_rate_limits": { 00:17:05.555 "rw_ios_per_sec": 0, 00:17:05.555 "rw_mbytes_per_sec": 0, 00:17:05.555 "r_mbytes_per_sec": 0, 00:17:05.555 "w_mbytes_per_sec": 0 00:17:05.555 }, 00:17:05.555 "claimed": false, 00:17:05.555 "zoned": false, 00:17:05.555 "supported_io_types": { 00:17:05.555 "read": true, 00:17:05.555 "write": true, 00:17:05.555 "unmap": true, 00:17:05.555 "flush": true, 00:17:05.555 "reset": true, 00:17:05.555 "nvme_admin": false, 00:17:05.555 "nvme_io": false, 00:17:05.555 "nvme_io_md": false, 00:17:05.555 "write_zeroes": true, 00:17:05.555 "zcopy": true, 00:17:05.555 "get_zone_info": false, 00:17:05.555 "zone_management": false, 00:17:05.555 "zone_append": false, 00:17:05.555 "compare": false, 00:17:05.555 "compare_and_write": false, 00:17:05.555 "abort": true, 00:17:05.555 "seek_hole": false, 00:17:05.555 "seek_data": false, 00:17:05.555 "copy": true, 00:17:05.555 "nvme_iov_md": false 00:17:05.555 }, 00:17:05.555 "memory_domains": [ 00:17:05.555 { 00:17:05.555 "dma_device_id": "system", 00:17:05.555 "dma_device_type": 1 00:17:05.555 }, 00:17:05.555 { 00:17:05.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.555 "dma_device_type": 2 00:17:05.555 } 00:17:05.555 ], 00:17:05.555 "driver_specific": {} 00:17:05.555 } 00:17:05.555 ] 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.555 BaseBdev4 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.555 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:05.814 [ 00:17:05.814 { 00:17:05.814 "name": "BaseBdev4", 00:17:05.814 "aliases": [ 00:17:05.814 "c4e6db93-fb1c-4522-9c87-e72229f01a52" 00:17:05.814 ], 00:17:05.814 "product_name": "Malloc disk", 00:17:05.814 "block_size": 512, 00:17:05.814 "num_blocks": 65536, 00:17:05.814 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:05.814 "assigned_rate_limits": { 00:17:05.814 "rw_ios_per_sec": 0, 00:17:05.814 "rw_mbytes_per_sec": 0, 00:17:05.814 "r_mbytes_per_sec": 0, 00:17:05.814 "w_mbytes_per_sec": 0 00:17:05.814 }, 00:17:05.814 "claimed": false, 00:17:05.814 "zoned": false, 00:17:05.814 "supported_io_types": { 00:17:05.814 "read": true, 00:17:05.814 "write": true, 00:17:05.814 "unmap": true, 00:17:05.814 "flush": true, 00:17:05.814 "reset": true, 00:17:05.814 "nvme_admin": false, 00:17:05.814 "nvme_io": false, 00:17:05.814 "nvme_io_md": false, 00:17:05.814 "write_zeroes": true, 00:17:05.814 "zcopy": true, 00:17:05.814 "get_zone_info": false, 00:17:05.814 "zone_management": false, 00:17:05.814 "zone_append": false, 00:17:05.814 "compare": false, 00:17:05.814 "compare_and_write": false, 00:17:05.814 "abort": true, 00:17:05.814 "seek_hole": false, 00:17:05.814 "seek_data": false, 00:17:05.814 "copy": true, 00:17:05.814 "nvme_iov_md": false 00:17:05.814 }, 00:17:05.814 "memory_domains": [ 00:17:05.814 { 00:17:05.814 "dma_device_id": "system", 00:17:05.814 "dma_device_type": 1 00:17:05.814 }, 00:17:05.814 { 00:17:05.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.814 "dma_device_type": 2 00:17:05.814 } 00:17:05.814 ], 00:17:05.814 "driver_specific": {} 00:17:05.814 } 00:17:05.814 ] 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.814 13:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.814 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.814 [2024-12-06 13:11:12.110966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.814 [2024-12-06 13:11:12.111196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.814 [2024-12-06 13:11:12.111358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.814 [2024-12-06 13:11:12.114226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.814 [2024-12-06 13:11:12.114467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.815 "name": "Existed_Raid", 00:17:05.815 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:05.815 "strip_size_kb": 64, 00:17:05.815 "state": "configuring", 00:17:05.815 "raid_level": "raid0", 00:17:05.815 "superblock": true, 00:17:05.815 "num_base_bdevs": 4, 00:17:05.815 "num_base_bdevs_discovered": 3, 00:17:05.815 "num_base_bdevs_operational": 4, 00:17:05.815 "base_bdevs_list": [ 00:17:05.815 { 00:17:05.815 "name": "BaseBdev1", 00:17:05.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.815 "is_configured": false, 00:17:05.815 "data_offset": 0, 00:17:05.815 "data_size": 0 00:17:05.815 }, 00:17:05.815 { 00:17:05.815 "name": "BaseBdev2", 00:17:05.815 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:05.815 "is_configured": true, 00:17:05.815 "data_offset": 2048, 00:17:05.815 "data_size": 63488 
00:17:05.815 }, 00:17:05.815 { 00:17:05.815 "name": "BaseBdev3", 00:17:05.815 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:05.815 "is_configured": true, 00:17:05.815 "data_offset": 2048, 00:17:05.815 "data_size": 63488 00:17:05.815 }, 00:17:05.815 { 00:17:05.815 "name": "BaseBdev4", 00:17:05.815 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:05.815 "is_configured": true, 00:17:05.815 "data_offset": 2048, 00:17:05.815 "data_size": 63488 00:17:05.815 } 00:17:05.815 ] 00:17:05.815 }' 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.815 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.382 [2024-12-06 13:11:12.639105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.382 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.383 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.383 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.383 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.383 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.383 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.383 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.383 "name": "Existed_Raid", 00:17:06.383 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:06.383 "strip_size_kb": 64, 00:17:06.383 "state": "configuring", 00:17:06.383 "raid_level": "raid0", 00:17:06.383 "superblock": true, 00:17:06.383 "num_base_bdevs": 4, 00:17:06.383 "num_base_bdevs_discovered": 2, 00:17:06.383 "num_base_bdevs_operational": 4, 00:17:06.383 "base_bdevs_list": [ 00:17:06.383 { 00:17:06.383 "name": "BaseBdev1", 00:17:06.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.383 "is_configured": false, 00:17:06.383 "data_offset": 0, 00:17:06.383 "data_size": 0 00:17:06.383 }, 00:17:06.383 { 00:17:06.383 "name": null, 00:17:06.383 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:06.383 "is_configured": false, 00:17:06.383 "data_offset": 0, 00:17:06.383 "data_size": 63488 
00:17:06.383 }, 00:17:06.383 { 00:17:06.383 "name": "BaseBdev3", 00:17:06.383 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:06.383 "is_configured": true, 00:17:06.383 "data_offset": 2048, 00:17:06.383 "data_size": 63488 00:17:06.383 }, 00:17:06.383 { 00:17:06.383 "name": "BaseBdev4", 00:17:06.383 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:06.383 "is_configured": true, 00:17:06.383 "data_offset": 2048, 00:17:06.383 "data_size": 63488 00:17:06.383 } 00:17:06.383 ] 00:17:06.383 }' 00:17:06.383 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.383 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.671 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.671 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.671 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.671 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:06.671 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.931 [2024-12-06 13:11:13.260424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.931 BaseBdev1 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.931 [ 00:17:06.931 { 00:17:06.931 "name": "BaseBdev1", 00:17:06.931 "aliases": [ 00:17:06.931 "360f431b-03a3-4844-bc10-c01cc6ba25ac" 00:17:06.931 ], 00:17:06.931 "product_name": "Malloc disk", 00:17:06.931 "block_size": 512, 00:17:06.931 "num_blocks": 65536, 00:17:06.931 "uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:06.931 "assigned_rate_limits": { 00:17:06.931 "rw_ios_per_sec": 0, 00:17:06.931 "rw_mbytes_per_sec": 0, 
00:17:06.931 "r_mbytes_per_sec": 0, 00:17:06.931 "w_mbytes_per_sec": 0 00:17:06.931 }, 00:17:06.931 "claimed": true, 00:17:06.931 "claim_type": "exclusive_write", 00:17:06.931 "zoned": false, 00:17:06.931 "supported_io_types": { 00:17:06.931 "read": true, 00:17:06.931 "write": true, 00:17:06.931 "unmap": true, 00:17:06.931 "flush": true, 00:17:06.931 "reset": true, 00:17:06.931 "nvme_admin": false, 00:17:06.931 "nvme_io": false, 00:17:06.931 "nvme_io_md": false, 00:17:06.931 "write_zeroes": true, 00:17:06.931 "zcopy": true, 00:17:06.931 "get_zone_info": false, 00:17:06.931 "zone_management": false, 00:17:06.931 "zone_append": false, 00:17:06.931 "compare": false, 00:17:06.931 "compare_and_write": false, 00:17:06.931 "abort": true, 00:17:06.931 "seek_hole": false, 00:17:06.931 "seek_data": false, 00:17:06.931 "copy": true, 00:17:06.931 "nvme_iov_md": false 00:17:06.931 }, 00:17:06.931 "memory_domains": [ 00:17:06.931 { 00:17:06.931 "dma_device_id": "system", 00:17:06.931 "dma_device_type": 1 00:17:06.931 }, 00:17:06.931 { 00:17:06.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.931 "dma_device_type": 2 00:17:06.931 } 00:17:06.931 ], 00:17:06.931 "driver_specific": {} 00:17:06.931 } 00:17:06.931 ] 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:06.931 13:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.931 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.931 "name": "Existed_Raid", 00:17:06.931 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:06.931 "strip_size_kb": 64, 00:17:06.931 "state": "configuring", 00:17:06.931 "raid_level": "raid0", 00:17:06.931 "superblock": true, 00:17:06.931 "num_base_bdevs": 4, 00:17:06.931 "num_base_bdevs_discovered": 3, 00:17:06.931 "num_base_bdevs_operational": 4, 00:17:06.931 "base_bdevs_list": [ 00:17:06.931 { 00:17:06.931 "name": "BaseBdev1", 00:17:06.931 "uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:06.931 "is_configured": true, 00:17:06.931 "data_offset": 2048, 00:17:06.931 "data_size": 63488 00:17:06.931 }, 00:17:06.931 { 
00:17:06.931 "name": null, 00:17:06.931 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:06.931 "is_configured": false, 00:17:06.931 "data_offset": 0, 00:17:06.931 "data_size": 63488 00:17:06.931 }, 00:17:06.931 { 00:17:06.931 "name": "BaseBdev3", 00:17:06.931 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:06.932 "is_configured": true, 00:17:06.932 "data_offset": 2048, 00:17:06.932 "data_size": 63488 00:17:06.932 }, 00:17:06.932 { 00:17:06.932 "name": "BaseBdev4", 00:17:06.932 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:06.932 "is_configured": true, 00:17:06.932 "data_offset": 2048, 00:17:06.932 "data_size": 63488 00:17:06.932 } 00:17:06.932 ] 00:17:06.932 }' 00:17:06.932 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.932 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.501 [2024-12-06 13:11:13.856743] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.501 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.501 13:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.501 "name": "Existed_Raid", 00:17:07.501 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:07.501 "strip_size_kb": 64, 00:17:07.501 "state": "configuring", 00:17:07.501 "raid_level": "raid0", 00:17:07.501 "superblock": true, 00:17:07.501 "num_base_bdevs": 4, 00:17:07.501 "num_base_bdevs_discovered": 2, 00:17:07.501 "num_base_bdevs_operational": 4, 00:17:07.501 "base_bdevs_list": [ 00:17:07.501 { 00:17:07.501 "name": "BaseBdev1", 00:17:07.501 "uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:07.501 "is_configured": true, 00:17:07.501 "data_offset": 2048, 00:17:07.501 "data_size": 63488 00:17:07.501 }, 00:17:07.501 { 00:17:07.501 "name": null, 00:17:07.501 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:07.501 "is_configured": false, 00:17:07.501 "data_offset": 0, 00:17:07.501 "data_size": 63488 00:17:07.501 }, 00:17:07.501 { 00:17:07.501 "name": null, 00:17:07.501 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:07.501 "is_configured": false, 00:17:07.501 "data_offset": 0, 00:17:07.501 "data_size": 63488 00:17:07.501 }, 00:17:07.501 { 00:17:07.501 "name": "BaseBdev4", 00:17:07.501 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:07.501 "is_configured": true, 00:17:07.501 "data_offset": 2048, 00:17:07.501 "data_size": 63488 00:17:07.501 } 00:17:07.501 ] 00:17:07.501 }' 00:17:07.502 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.502 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.078 
13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.078 [2024-12-06 13:11:14.396846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.078 "name": "Existed_Raid", 00:17:08.078 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:08.078 "strip_size_kb": 64, 00:17:08.078 "state": "configuring", 00:17:08.078 "raid_level": "raid0", 00:17:08.078 "superblock": true, 00:17:08.078 "num_base_bdevs": 4, 00:17:08.078 "num_base_bdevs_discovered": 3, 00:17:08.078 "num_base_bdevs_operational": 4, 00:17:08.078 "base_bdevs_list": [ 00:17:08.078 { 00:17:08.078 "name": "BaseBdev1", 00:17:08.078 "uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:08.078 "is_configured": true, 00:17:08.078 "data_offset": 2048, 00:17:08.078 "data_size": 63488 00:17:08.078 }, 00:17:08.078 { 00:17:08.078 "name": null, 00:17:08.078 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:08.078 "is_configured": false, 00:17:08.078 "data_offset": 0, 00:17:08.078 "data_size": 63488 00:17:08.078 }, 00:17:08.078 { 00:17:08.078 "name": "BaseBdev3", 00:17:08.078 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:08.078 "is_configured": true, 00:17:08.078 "data_offset": 2048, 00:17:08.078 "data_size": 63488 00:17:08.078 }, 00:17:08.078 { 00:17:08.078 "name": "BaseBdev4", 00:17:08.078 "uuid": 
"c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:08.078 "is_configured": true, 00:17:08.078 "data_offset": 2048, 00:17:08.078 "data_size": 63488 00:17:08.078 } 00:17:08.078 ] 00:17:08.078 }' 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.078 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.646 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 [2024-12-06 13:11:14.989061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.646 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.646 "name": "Existed_Raid", 00:17:08.646 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:08.646 "strip_size_kb": 64, 00:17:08.646 "state": "configuring", 00:17:08.646 "raid_level": "raid0", 00:17:08.646 "superblock": true, 00:17:08.646 "num_base_bdevs": 4, 00:17:08.646 "num_base_bdevs_discovered": 2, 00:17:08.646 "num_base_bdevs_operational": 4, 00:17:08.646 "base_bdevs_list": [ 00:17:08.646 { 00:17:08.646 "name": null, 00:17:08.646 
"uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:08.646 "is_configured": false, 00:17:08.646 "data_offset": 0, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": null, 00:17:08.646 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:08.646 "is_configured": false, 00:17:08.646 "data_offset": 0, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": "BaseBdev3", 00:17:08.646 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:08.646 "is_configured": true, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.646 }, 00:17:08.646 { 00:17:08.646 "name": "BaseBdev4", 00:17:08.646 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:08.646 "is_configured": true, 00:17:08.646 "data_offset": 2048, 00:17:08.646 "data_size": 63488 00:17:08.647 } 00:17:08.647 ] 00:17:08.647 }' 00:17:08.647 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.647 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.215 [2024-12-06 13:11:15.690389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.215 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.473 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.473 "name": "Existed_Raid", 00:17:09.473 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:09.473 "strip_size_kb": 64, 00:17:09.473 "state": "configuring", 00:17:09.473 "raid_level": "raid0", 00:17:09.473 "superblock": true, 00:17:09.473 "num_base_bdevs": 4, 00:17:09.473 "num_base_bdevs_discovered": 3, 00:17:09.473 "num_base_bdevs_operational": 4, 00:17:09.473 "base_bdevs_list": [ 00:17:09.473 { 00:17:09.473 "name": null, 00:17:09.473 "uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:09.473 "is_configured": false, 00:17:09.473 "data_offset": 0, 00:17:09.473 "data_size": 63488 00:17:09.473 }, 00:17:09.473 { 00:17:09.473 "name": "BaseBdev2", 00:17:09.473 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:09.473 "is_configured": true, 00:17:09.473 "data_offset": 2048, 00:17:09.473 "data_size": 63488 00:17:09.473 }, 00:17:09.473 { 00:17:09.473 "name": "BaseBdev3", 00:17:09.473 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:09.473 "is_configured": true, 00:17:09.473 "data_offset": 2048, 00:17:09.473 "data_size": 63488 00:17:09.473 }, 00:17:09.473 { 00:17:09.473 "name": "BaseBdev4", 00:17:09.473 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:09.473 "is_configured": true, 00:17:09.473 "data_offset": 2048, 00:17:09.473 "data_size": 63488 00:17:09.473 } 00:17:09.473 ] 00:17:09.473 }' 00:17:09.473 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.473 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.731 13:11:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.731 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 360f431b-03a3-4844-bc10-c01cc6ba25ac 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.990 [2024-12-06 13:11:16.311972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:09.990 [2024-12-06 13:11:16.312285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:09.990 [2024-12-06 13:11:16.312304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:09.990 [2024-12-06 13:11:16.312668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:17:09.990 NewBaseBdev 00:17:09.990 [2024-12-06 13:11:16.312846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:09.990 [2024-12-06 13:11:16.312867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:09.990 [2024-12-06 13:11:16.313024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.990 13:11:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.990 [ 00:17:09.990 { 00:17:09.990 "name": "NewBaseBdev", 00:17:09.990 "aliases": [ 00:17:09.990 "360f431b-03a3-4844-bc10-c01cc6ba25ac" 00:17:09.990 ], 00:17:09.990 "product_name": "Malloc disk", 00:17:09.990 "block_size": 512, 00:17:09.990 "num_blocks": 65536, 00:17:09.990 "uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:09.990 "assigned_rate_limits": { 00:17:09.990 "rw_ios_per_sec": 0, 00:17:09.990 "rw_mbytes_per_sec": 0, 00:17:09.990 "r_mbytes_per_sec": 0, 00:17:09.990 "w_mbytes_per_sec": 0 00:17:09.990 }, 00:17:09.990 "claimed": true, 00:17:09.990 "claim_type": "exclusive_write", 00:17:09.990 "zoned": false, 00:17:09.990 "supported_io_types": { 00:17:09.990 "read": true, 00:17:09.990 "write": true, 00:17:09.990 "unmap": true, 00:17:09.990 "flush": true, 00:17:09.990 "reset": true, 00:17:09.990 "nvme_admin": false, 00:17:09.990 "nvme_io": false, 00:17:09.990 "nvme_io_md": false, 00:17:09.990 "write_zeroes": true, 00:17:09.990 "zcopy": true, 00:17:09.990 "get_zone_info": false, 00:17:09.990 "zone_management": false, 00:17:09.990 "zone_append": false, 00:17:09.990 "compare": false, 00:17:09.990 "compare_and_write": false, 00:17:09.990 "abort": true, 00:17:09.990 "seek_hole": false, 00:17:09.990 "seek_data": false, 00:17:09.990 "copy": true, 00:17:09.990 "nvme_iov_md": false 00:17:09.990 }, 00:17:09.990 "memory_domains": [ 00:17:09.990 { 00:17:09.990 "dma_device_id": "system", 00:17:09.990 "dma_device_type": 1 00:17:09.990 }, 00:17:09.990 { 00:17:09.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.990 "dma_device_type": 2 00:17:09.990 } 00:17:09.990 ], 00:17:09.990 "driver_specific": {} 00:17:09.990 } 00:17:09.990 ] 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:09.990 13:11:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.990 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.991 "name": "Existed_Raid", 00:17:09.991 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:09.991 "strip_size_kb": 64, 00:17:09.991 
"state": "online", 00:17:09.991 "raid_level": "raid0", 00:17:09.991 "superblock": true, 00:17:09.991 "num_base_bdevs": 4, 00:17:09.991 "num_base_bdevs_discovered": 4, 00:17:09.991 "num_base_bdevs_operational": 4, 00:17:09.991 "base_bdevs_list": [ 00:17:09.991 { 00:17:09.991 "name": "NewBaseBdev", 00:17:09.991 "uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:09.991 "is_configured": true, 00:17:09.991 "data_offset": 2048, 00:17:09.991 "data_size": 63488 00:17:09.991 }, 00:17:09.991 { 00:17:09.991 "name": "BaseBdev2", 00:17:09.991 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:09.991 "is_configured": true, 00:17:09.991 "data_offset": 2048, 00:17:09.991 "data_size": 63488 00:17:09.991 }, 00:17:09.991 { 00:17:09.991 "name": "BaseBdev3", 00:17:09.991 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:09.991 "is_configured": true, 00:17:09.991 "data_offset": 2048, 00:17:09.991 "data_size": 63488 00:17:09.991 }, 00:17:09.991 { 00:17:09.991 "name": "BaseBdev4", 00:17:09.991 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:09.991 "is_configured": true, 00:17:09.991 "data_offset": 2048, 00:17:09.991 "data_size": 63488 00:17:09.991 } 00:17:09.991 ] 00:17:09.991 }' 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.991 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:10.558 
13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.558 [2024-12-06 13:11:16.864631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.558 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:10.558 "name": "Existed_Raid", 00:17:10.558 "aliases": [ 00:17:10.558 "c7d8a7dc-0cbe-4081-8764-8f6723052ce8" 00:17:10.558 ], 00:17:10.558 "product_name": "Raid Volume", 00:17:10.558 "block_size": 512, 00:17:10.558 "num_blocks": 253952, 00:17:10.558 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:10.558 "assigned_rate_limits": { 00:17:10.558 "rw_ios_per_sec": 0, 00:17:10.558 "rw_mbytes_per_sec": 0, 00:17:10.558 "r_mbytes_per_sec": 0, 00:17:10.558 "w_mbytes_per_sec": 0 00:17:10.559 }, 00:17:10.559 "claimed": false, 00:17:10.559 "zoned": false, 00:17:10.559 "supported_io_types": { 00:17:10.559 "read": true, 00:17:10.559 "write": true, 00:17:10.559 "unmap": true, 00:17:10.559 "flush": true, 00:17:10.559 "reset": true, 00:17:10.559 "nvme_admin": false, 00:17:10.559 "nvme_io": false, 00:17:10.559 "nvme_io_md": false, 00:17:10.559 "write_zeroes": true, 00:17:10.559 "zcopy": false, 00:17:10.559 "get_zone_info": false, 00:17:10.559 "zone_management": false, 00:17:10.559 "zone_append": false, 00:17:10.559 "compare": false, 00:17:10.559 "compare_and_write": false, 00:17:10.559 "abort": 
false, 00:17:10.559 "seek_hole": false, 00:17:10.559 "seek_data": false, 00:17:10.559 "copy": false, 00:17:10.559 "nvme_iov_md": false 00:17:10.559 }, 00:17:10.559 "memory_domains": [ 00:17:10.559 { 00:17:10.559 "dma_device_id": "system", 00:17:10.559 "dma_device_type": 1 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.559 "dma_device_type": 2 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "dma_device_id": "system", 00:17:10.559 "dma_device_type": 1 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.559 "dma_device_type": 2 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "dma_device_id": "system", 00:17:10.559 "dma_device_type": 1 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.559 "dma_device_type": 2 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "dma_device_id": "system", 00:17:10.559 "dma_device_type": 1 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.559 "dma_device_type": 2 00:17:10.559 } 00:17:10.559 ], 00:17:10.559 "driver_specific": { 00:17:10.559 "raid": { 00:17:10.559 "uuid": "c7d8a7dc-0cbe-4081-8764-8f6723052ce8", 00:17:10.559 "strip_size_kb": 64, 00:17:10.559 "state": "online", 00:17:10.559 "raid_level": "raid0", 00:17:10.559 "superblock": true, 00:17:10.559 "num_base_bdevs": 4, 00:17:10.559 "num_base_bdevs_discovered": 4, 00:17:10.559 "num_base_bdevs_operational": 4, 00:17:10.559 "base_bdevs_list": [ 00:17:10.559 { 00:17:10.559 "name": "NewBaseBdev", 00:17:10.559 "uuid": "360f431b-03a3-4844-bc10-c01cc6ba25ac", 00:17:10.559 "is_configured": true, 00:17:10.559 "data_offset": 2048, 00:17:10.559 "data_size": 63488 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "name": "BaseBdev2", 00:17:10.559 "uuid": "8238f56f-bda8-454e-a9d6-209eb0ad41d5", 00:17:10.559 "is_configured": true, 00:17:10.559 "data_offset": 2048, 00:17:10.559 "data_size": 63488 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 
"name": "BaseBdev3", 00:17:10.559 "uuid": "049c6264-24e7-43ae-ab69-a35d43883e40", 00:17:10.559 "is_configured": true, 00:17:10.559 "data_offset": 2048, 00:17:10.559 "data_size": 63488 00:17:10.559 }, 00:17:10.559 { 00:17:10.559 "name": "BaseBdev4", 00:17:10.559 "uuid": "c4e6db93-fb1c-4522-9c87-e72229f01a52", 00:17:10.559 "is_configured": true, 00:17:10.559 "data_offset": 2048, 00:17:10.559 "data_size": 63488 00:17:10.559 } 00:17:10.559 ] 00:17:10.559 } 00:17:10.559 } 00:17:10.559 }' 00:17:10.559 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:10.559 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:10.559 BaseBdev2 00:17:10.559 BaseBdev3 00:17:10.559 BaseBdev4' 00:17:10.559 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.559 13:11:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.559 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.818 [2024-12-06 13:11:17.240243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.818 [2024-12-06 13:11:17.241019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.818 [2024-12-06 13:11:17.241141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.818 [2024-12-06 13:11:17.241243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.818 [2024-12-06 13:11:17.241269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70420 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70420 ']' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70420 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70420 00:17:10.818 killing process with pid 70420 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70420' 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70420 00:17:10.818 [2024-12-06 13:11:17.280205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.818 13:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70420 00:17:11.385 [2024-12-06 13:11:17.659187] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.321 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:12.321 00:17:12.321 real 0m13.026s 00:17:12.321 user 0m21.351s 00:17:12.321 sys 0m1.890s 00:17:12.321 ************************************ 00:17:12.321 END TEST raid_state_function_test_sb 00:17:12.321 
************************************ 00:17:12.321 13:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.321 13:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.321 13:11:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:12.321 13:11:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:12.321 13:11:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.321 13:11:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.580 ************************************ 00:17:12.580 START TEST raid_superblock_test 00:17:12.580 ************************************ 00:17:12.580 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:17:12.580 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:17:12.580 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:12.580 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:12.580 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:12.580 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:12.580 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71102 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71102 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71102 ']' 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.581 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.581 [2024-12-06 13:11:18.966827] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:17:12.581 [2024-12-06 13:11:18.967226] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71102 ] 00:17:12.840 [2024-12-06 13:11:19.158763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.840 [2024-12-06 13:11:19.328468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.099 [2024-12-06 13:11:19.562938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.099 [2024-12-06 13:11:19.563036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:13.665 
13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.665 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.665 malloc1 00:17:13.665 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.665 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:13.665 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.665 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.665 [2024-12-06 13:11:20.024888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:13.665 [2024-12-06 13:11:20.024973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.665 [2024-12-06 13:11:20.025009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:13.665 [2024-12-06 13:11:20.025026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.665 [2024-12-06 13:11:20.028113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.665 [2024-12-06 13:11:20.028312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:13.665 pt1 00:17:13.665 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.665 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.665 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.665 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.666 malloc2 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.666 [2024-12-06 13:11:20.076682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.666 [2024-12-06 13:11:20.076763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.666 [2024-12-06 13:11:20.076801] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:13.666 [2024-12-06 13:11:20.076816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.666 [2024-12-06 13:11:20.079796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.666 [2024-12-06 13:11:20.079846] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.666 
pt2 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.666 malloc3 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.666 [2024-12-06 13:11:20.151591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:13.666 [2024-12-06 13:11:20.151677] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.666 [2024-12-06 13:11:20.151719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.666 [2024-12-06 13:11:20.151739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.666 [2024-12-06 13:11:20.155367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.666 [2024-12-06 13:11:20.155423] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:13.666 pt3 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.666 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.925 malloc4 00:17:13.925 13:11:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.925 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:13.925 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.925 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.925 [2024-12-06 13:11:20.214808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:13.925 [2024-12-06 13:11:20.214905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.925 [2024-12-06 13:11:20.214947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:13.925 [2024-12-06 13:11:20.214965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.926 [2024-12-06 13:11:20.218553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.926 [2024-12-06 13:11:20.218609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:13.926 pt4 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.926 [2024-12-06 13:11:20.226911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:13.926 [2024-12-06 
13:11:20.229999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.926 [2024-12-06 13:11:20.230346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:13.926 [2024-12-06 13:11:20.230471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:13.926 [2024-12-06 13:11:20.230792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:13.926 [2024-12-06 13:11:20.230816] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:13.926 [2024-12-06 13:11:20.231249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:13.926 [2024-12-06 13:11:20.231553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:13.926 [2024-12-06 13:11:20.231587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:13.926 [2024-12-06 13:11:20.231877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.926 "name": "raid_bdev1", 00:17:13.926 "uuid": "2831451b-88ba-4a51-a28c-eab364fc71ad", 00:17:13.926 "strip_size_kb": 64, 00:17:13.926 "state": "online", 00:17:13.926 "raid_level": "raid0", 00:17:13.926 "superblock": true, 00:17:13.926 "num_base_bdevs": 4, 00:17:13.926 "num_base_bdevs_discovered": 4, 00:17:13.926 "num_base_bdevs_operational": 4, 00:17:13.926 "base_bdevs_list": [ 00:17:13.926 { 00:17:13.926 "name": "pt1", 00:17:13.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.926 "is_configured": true, 00:17:13.926 "data_offset": 2048, 00:17:13.926 "data_size": 63488 00:17:13.926 }, 00:17:13.926 { 00:17:13.926 "name": "pt2", 00:17:13.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.926 "is_configured": true, 00:17:13.926 "data_offset": 2048, 00:17:13.926 "data_size": 63488 00:17:13.926 }, 00:17:13.926 { 00:17:13.926 "name": "pt3", 00:17:13.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.926 "is_configured": true, 00:17:13.926 "data_offset": 2048, 00:17:13.926 
"data_size": 63488 00:17:13.926 }, 00:17:13.926 { 00:17:13.926 "name": "pt4", 00:17:13.926 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:13.926 "is_configured": true, 00:17:13.926 "data_offset": 2048, 00:17:13.926 "data_size": 63488 00:17:13.926 } 00:17:13.926 ] 00:17:13.926 }' 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.926 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.517 [2024-12-06 13:11:20.743542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:14.517 "name": "raid_bdev1", 00:17:14.517 "aliases": [ 00:17:14.517 "2831451b-88ba-4a51-a28c-eab364fc71ad" 
00:17:14.517 ], 00:17:14.517 "product_name": "Raid Volume", 00:17:14.517 "block_size": 512, 00:17:14.517 "num_blocks": 253952, 00:17:14.517 "uuid": "2831451b-88ba-4a51-a28c-eab364fc71ad", 00:17:14.517 "assigned_rate_limits": { 00:17:14.517 "rw_ios_per_sec": 0, 00:17:14.517 "rw_mbytes_per_sec": 0, 00:17:14.517 "r_mbytes_per_sec": 0, 00:17:14.517 "w_mbytes_per_sec": 0 00:17:14.517 }, 00:17:14.517 "claimed": false, 00:17:14.517 "zoned": false, 00:17:14.517 "supported_io_types": { 00:17:14.517 "read": true, 00:17:14.517 "write": true, 00:17:14.517 "unmap": true, 00:17:14.517 "flush": true, 00:17:14.517 "reset": true, 00:17:14.517 "nvme_admin": false, 00:17:14.517 "nvme_io": false, 00:17:14.517 "nvme_io_md": false, 00:17:14.517 "write_zeroes": true, 00:17:14.517 "zcopy": false, 00:17:14.517 "get_zone_info": false, 00:17:14.517 "zone_management": false, 00:17:14.517 "zone_append": false, 00:17:14.517 "compare": false, 00:17:14.517 "compare_and_write": false, 00:17:14.517 "abort": false, 00:17:14.517 "seek_hole": false, 00:17:14.517 "seek_data": false, 00:17:14.517 "copy": false, 00:17:14.517 "nvme_iov_md": false 00:17:14.517 }, 00:17:14.517 "memory_domains": [ 00:17:14.517 { 00:17:14.517 "dma_device_id": "system", 00:17:14.517 "dma_device_type": 1 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.517 "dma_device_type": 2 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "dma_device_id": "system", 00:17:14.517 "dma_device_type": 1 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.517 "dma_device_type": 2 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "dma_device_id": "system", 00:17:14.517 "dma_device_type": 1 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.517 "dma_device_type": 2 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "dma_device_id": "system", 00:17:14.517 "dma_device_type": 1 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:14.517 "dma_device_type": 2 00:17:14.517 } 00:17:14.517 ], 00:17:14.517 "driver_specific": { 00:17:14.517 "raid": { 00:17:14.517 "uuid": "2831451b-88ba-4a51-a28c-eab364fc71ad", 00:17:14.517 "strip_size_kb": 64, 00:17:14.517 "state": "online", 00:17:14.517 "raid_level": "raid0", 00:17:14.517 "superblock": true, 00:17:14.517 "num_base_bdevs": 4, 00:17:14.517 "num_base_bdevs_discovered": 4, 00:17:14.517 "num_base_bdevs_operational": 4, 00:17:14.517 "base_bdevs_list": [ 00:17:14.517 { 00:17:14.517 "name": "pt1", 00:17:14.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.517 "is_configured": true, 00:17:14.517 "data_offset": 2048, 00:17:14.517 "data_size": 63488 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "name": "pt2", 00:17:14.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.517 "is_configured": true, 00:17:14.517 "data_offset": 2048, 00:17:14.517 "data_size": 63488 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "name": "pt3", 00:17:14.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.517 "is_configured": true, 00:17:14.517 "data_offset": 2048, 00:17:14.517 "data_size": 63488 00:17:14.517 }, 00:17:14.517 { 00:17:14.517 "name": "pt4", 00:17:14.517 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.517 "is_configured": true, 00:17:14.517 "data_offset": 2048, 00:17:14.517 "data_size": 63488 00:17:14.517 } 00:17:14.517 ] 00:17:14.517 } 00:17:14.517 } 00:17:14.517 }' 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:14.517 pt2 00:17:14.517 pt3 00:17:14.517 pt4' 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.517 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.777 13:11:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 [2024-12-06 13:11:21.191634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2831451b-88ba-4a51-a28c-eab364fc71ad 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2831451b-88ba-4a51-a28c-eab364fc71ad ']' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 [2024-12-06 13:11:21.243227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.777 [2024-12-06 13:11:21.243269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.777 [2024-12-06 13:11:21.243398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.777 [2024-12-06 13:11:21.243539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.777 [2024-12-06 13:11:21.243568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.777 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.037 13:11:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 [2024-12-06 13:11:21.403314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:15.037 [2024-12-06 13:11:21.406140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:15.037 [2024-12-06 13:11:21.406217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:15.037 [2024-12-06 13:11:21.406294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:15.037 [2024-12-06 13:11:21.406379] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:15.037 [2024-12-06 13:11:21.406487] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:15.037 [2024-12-06 13:11:21.406525] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:15.037 [2024-12-06 13:11:21.406557] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:15.037 [2024-12-06 13:11:21.406579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.037 [2024-12-06 13:11:21.406598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:17:15.037 request: 00:17:15.037 { 00:17:15.037 "name": "raid_bdev1", 00:17:15.037 "raid_level": "raid0", 00:17:15.037 "base_bdevs": [ 00:17:15.037 "malloc1", 00:17:15.037 "malloc2", 00:17:15.037 "malloc3", 00:17:15.037 "malloc4" 00:17:15.037 ], 00:17:15.037 "strip_size_kb": 64, 00:17:15.037 "superblock": false, 00:17:15.037 "method": "bdev_raid_create", 00:17:15.037 "req_id": 1 00:17:15.037 } 00:17:15.037 Got JSON-RPC error response 00:17:15.037 response: 00:17:15.037 { 00:17:15.037 "code": -17, 00:17:15.037 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:15.037 } 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.037 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.037 [2024-12-06 13:11:21.467388] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:15.037 [2024-12-06 13:11:21.467653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.037 [2024-12-06 13:11:21.467706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:15.037 [2024-12-06 13:11:21.467727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.038 [2024-12-06 13:11:21.470959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.038 [2024-12-06 13:11:21.471143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:15.038 [2024-12-06 13:11:21.471287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:15.038 [2024-12-06 13:11:21.471375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:15.038 pt1 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.038 "name": "raid_bdev1", 00:17:15.038 "uuid": "2831451b-88ba-4a51-a28c-eab364fc71ad", 00:17:15.038 "strip_size_kb": 64, 00:17:15.038 "state": "configuring", 00:17:15.038 "raid_level": "raid0", 00:17:15.038 "superblock": true, 00:17:15.038 "num_base_bdevs": 4, 00:17:15.038 "num_base_bdevs_discovered": 1, 00:17:15.038 "num_base_bdevs_operational": 4, 00:17:15.038 "base_bdevs_list": [ 00:17:15.038 { 00:17:15.038 "name": "pt1", 00:17:15.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.038 "is_configured": true, 00:17:15.038 "data_offset": 2048, 00:17:15.038 "data_size": 63488 00:17:15.038 }, 00:17:15.038 { 00:17:15.038 "name": null, 00:17:15.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.038 "is_configured": false, 00:17:15.038 "data_offset": 2048, 00:17:15.038 "data_size": 63488 00:17:15.038 }, 00:17:15.038 { 00:17:15.038 "name": null, 00:17:15.038 
"uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.038 "is_configured": false, 00:17:15.038 "data_offset": 2048, 00:17:15.038 "data_size": 63488 00:17:15.038 }, 00:17:15.038 { 00:17:15.038 "name": null, 00:17:15.038 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.038 "is_configured": false, 00:17:15.038 "data_offset": 2048, 00:17:15.038 "data_size": 63488 00:17:15.038 } 00:17:15.038 ] 00:17:15.038 }' 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.038 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.606 [2024-12-06 13:11:22.035586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.606 [2024-12-06 13:11:22.035695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.606 [2024-12-06 13:11:22.035731] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:15.606 [2024-12-06 13:11:22.035751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.606 [2024-12-06 13:11:22.036377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.606 [2024-12-06 13:11:22.036416] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.606 [2024-12-06 13:11:22.036551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.606 [2024-12-06 13:11:22.036596] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.606 pt2 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.606 [2024-12-06 13:11:22.043549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.606 13:11:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.606 "name": "raid_bdev1", 00:17:15.606 "uuid": "2831451b-88ba-4a51-a28c-eab364fc71ad", 00:17:15.606 "strip_size_kb": 64, 00:17:15.606 "state": "configuring", 00:17:15.606 "raid_level": "raid0", 00:17:15.606 "superblock": true, 00:17:15.606 "num_base_bdevs": 4, 00:17:15.606 "num_base_bdevs_discovered": 1, 00:17:15.606 "num_base_bdevs_operational": 4, 00:17:15.606 "base_bdevs_list": [ 00:17:15.606 { 00:17:15.606 "name": "pt1", 00:17:15.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.606 "is_configured": true, 00:17:15.606 "data_offset": 2048, 00:17:15.606 "data_size": 63488 00:17:15.606 }, 00:17:15.606 { 00:17:15.606 "name": null, 00:17:15.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.606 "is_configured": false, 00:17:15.606 "data_offset": 0, 00:17:15.606 "data_size": 63488 00:17:15.606 }, 00:17:15.606 { 00:17:15.606 "name": null, 00:17:15.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.606 "is_configured": false, 00:17:15.606 "data_offset": 2048, 00:17:15.606 "data_size": 63488 00:17:15.606 }, 00:17:15.606 { 00:17:15.606 "name": null, 00:17:15.606 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.606 "is_configured": false, 00:17:15.606 "data_offset": 2048, 00:17:15.606 "data_size": 63488 00:17:15.606 } 00:17:15.606 ] 00:17:15.606 }' 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.606 13:11:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.174 [2024-12-06 13:11:22.595748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.174 [2024-12-06 13:11:22.595853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.174 [2024-12-06 13:11:22.595890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:16.174 [2024-12-06 13:11:22.595907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.174 [2024-12-06 13:11:22.596597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.174 [2024-12-06 13:11:22.597084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.174 [2024-12-06 13:11:22.597428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:16.174 [2024-12-06 13:11:22.597570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.174 pt2 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.174 [2024-12-06 13:11:22.603783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.174 [2024-12-06 13:11:22.604132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.174 [2024-12-06 13:11:22.604199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:16.174 [2024-12-06 13:11:22.604226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.174 [2024-12-06 13:11:22.605121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.174 [2024-12-06 13:11:22.605191] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.174 [2024-12-06 13:11:22.605344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:16.174 [2024-12-06 13:11:22.605408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.174 pt3 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.174 [2024-12-06 13:11:22.611716] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:16.174 [2024-12-06 13:11:22.611806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.174 [2024-12-06 13:11:22.611850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:16.174 [2024-12-06 13:11:22.611879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.174 [2024-12-06 13:11:22.612644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.174 [2024-12-06 13:11:22.612712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:16.174 [2024-12-06 13:11:22.612853] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:16.174 [2024-12-06 13:11:22.612904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:16.174 [2024-12-06 13:11:22.613203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:16.174 [2024-12-06 13:11:22.613244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:16.174 [2024-12-06 13:11:22.613794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:16.174 [2024-12-06 13:11:22.614138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:16.174 [2024-12-06 13:11:22.614179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:16.174 [2024-12-06 13:11:22.614517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.174 pt4 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.174 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.174 "name": "raid_bdev1", 00:17:16.174 "uuid": "2831451b-88ba-4a51-a28c-eab364fc71ad", 00:17:16.174 "strip_size_kb": 64, 00:17:16.174 "state": "online", 00:17:16.174 "raid_level": "raid0", 00:17:16.174 
"superblock": true, 00:17:16.174 "num_base_bdevs": 4, 00:17:16.174 "num_base_bdevs_discovered": 4, 00:17:16.174 "num_base_bdevs_operational": 4, 00:17:16.174 "base_bdevs_list": [ 00:17:16.174 { 00:17:16.174 "name": "pt1", 00:17:16.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.174 "is_configured": true, 00:17:16.174 "data_offset": 2048, 00:17:16.174 "data_size": 63488 00:17:16.174 }, 00:17:16.174 { 00:17:16.174 "name": "pt2", 00:17:16.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.175 "is_configured": true, 00:17:16.175 "data_offset": 2048, 00:17:16.175 "data_size": 63488 00:17:16.175 }, 00:17:16.175 { 00:17:16.175 "name": "pt3", 00:17:16.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.175 "is_configured": true, 00:17:16.175 "data_offset": 2048, 00:17:16.175 "data_size": 63488 00:17:16.175 }, 00:17:16.175 { 00:17:16.175 "name": "pt4", 00:17:16.175 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.175 "is_configured": true, 00:17:16.175 "data_offset": 2048, 00:17:16.175 "data_size": 63488 00:17:16.175 } 00:17:16.175 ] 00:17:16.175 }' 00:17:16.175 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.175 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.742 13:11:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.742 [2024-12-06 13:11:23.156573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.742 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.742 "name": "raid_bdev1", 00:17:16.742 "aliases": [ 00:17:16.742 "2831451b-88ba-4a51-a28c-eab364fc71ad" 00:17:16.742 ], 00:17:16.742 "product_name": "Raid Volume", 00:17:16.742 "block_size": 512, 00:17:16.742 "num_blocks": 253952, 00:17:16.742 "uuid": "2831451b-88ba-4a51-a28c-eab364fc71ad", 00:17:16.742 "assigned_rate_limits": { 00:17:16.742 "rw_ios_per_sec": 0, 00:17:16.742 "rw_mbytes_per_sec": 0, 00:17:16.742 "r_mbytes_per_sec": 0, 00:17:16.742 "w_mbytes_per_sec": 0 00:17:16.742 }, 00:17:16.742 "claimed": false, 00:17:16.742 "zoned": false, 00:17:16.742 "supported_io_types": { 00:17:16.742 "read": true, 00:17:16.742 "write": true, 00:17:16.742 "unmap": true, 00:17:16.742 "flush": true, 00:17:16.742 "reset": true, 00:17:16.742 "nvme_admin": false, 00:17:16.742 "nvme_io": false, 00:17:16.742 "nvme_io_md": false, 00:17:16.742 "write_zeroes": true, 00:17:16.742 "zcopy": false, 00:17:16.742 "get_zone_info": false, 00:17:16.742 "zone_management": false, 00:17:16.742 "zone_append": false, 00:17:16.742 "compare": false, 00:17:16.742 "compare_and_write": false, 00:17:16.742 "abort": false, 00:17:16.742 "seek_hole": false, 00:17:16.742 "seek_data": false, 00:17:16.742 "copy": false, 00:17:16.742 "nvme_iov_md": false 00:17:16.742 }, 00:17:16.742 
"memory_domains": [ 00:17:16.742 { 00:17:16.742 "dma_device_id": "system", 00:17:16.742 "dma_device_type": 1 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.742 "dma_device_type": 2 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "dma_device_id": "system", 00:17:16.742 "dma_device_type": 1 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.742 "dma_device_type": 2 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "dma_device_id": "system", 00:17:16.742 "dma_device_type": 1 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.742 "dma_device_type": 2 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "dma_device_id": "system", 00:17:16.742 "dma_device_type": 1 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.742 "dma_device_type": 2 00:17:16.742 } 00:17:16.742 ], 00:17:16.742 "driver_specific": { 00:17:16.742 "raid": { 00:17:16.742 "uuid": "2831451b-88ba-4a51-a28c-eab364fc71ad", 00:17:16.742 "strip_size_kb": 64, 00:17:16.742 "state": "online", 00:17:16.742 "raid_level": "raid0", 00:17:16.742 "superblock": true, 00:17:16.742 "num_base_bdevs": 4, 00:17:16.742 "num_base_bdevs_discovered": 4, 00:17:16.742 "num_base_bdevs_operational": 4, 00:17:16.742 "base_bdevs_list": [ 00:17:16.742 { 00:17:16.742 "name": "pt1", 00:17:16.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.742 "is_configured": true, 00:17:16.742 "data_offset": 2048, 00:17:16.742 "data_size": 63488 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "name": "pt2", 00:17:16.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.742 "is_configured": true, 00:17:16.742 "data_offset": 2048, 00:17:16.742 "data_size": 63488 00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "name": "pt3", 00:17:16.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.742 "is_configured": true, 00:17:16.742 "data_offset": 2048, 00:17:16.742 "data_size": 63488 
00:17:16.742 }, 00:17:16.742 { 00:17:16.742 "name": "pt4", 00:17:16.743 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.743 "is_configured": true, 00:17:16.743 "data_offset": 2048, 00:17:16.743 "data_size": 63488 00:17:16.743 } 00:17:16.743 ] 00:17:16.743 } 00:17:16.743 } 00:17:16.743 }' 00:17:16.743 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.743 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:16.743 pt2 00:17:16.743 pt3 00:17:16.743 pt4' 00:17:16.743 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.001 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:17.260 [2024-12-06 13:11:23.552552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2831451b-88ba-4a51-a28c-eab364fc71ad '!=' 2831451b-88ba-4a51-a28c-eab364fc71ad ']' 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71102 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71102 ']' 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71102 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71102 00:17:17.260 killing process with pid 71102 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71102' 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71102 00:17:17.260 [2024-12-06 13:11:23.628382] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.260 13:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71102 00:17:17.260 [2024-12-06 13:11:23.628515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.260 [2024-12-06 13:11:23.628622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.260 [2024-12-06 13:11:23.628641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:17.518 [2024-12-06 13:11:23.983490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.893 13:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:18.894 00:17:18.894 real 0m6.275s 00:17:18.894 user 0m9.323s 00:17:18.894 sys 0m0.983s 00:17:18.894 13:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.894 13:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.894 ************************************ 00:17:18.894 END TEST raid_superblock_test 
00:17:18.894 ************************************ 00:17:18.894 13:11:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:17:18.894 13:11:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:18.894 13:11:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.894 13:11:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.894 ************************************ 00:17:18.894 START TEST raid_read_error_test 00:17:18.894 ************************************ 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LoiOtgKi0M 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71372 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71372 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71372 ']' 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.894 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.894 [2024-12-06 13:11:25.297277] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:17:18.894 [2024-12-06 13:11:25.297484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71372 ] 00:17:19.152 [2024-12-06 13:11:25.474306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.152 [2024-12-06 13:11:25.623046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.410 [2024-12-06 13:11:25.846817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.410 [2024-12-06 13:11:25.846920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 BaseBdev1_malloc 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 true 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 [2024-12-06 13:11:26.388003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:19.992 [2024-12-06 13:11:26.388094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.992 [2024-12-06 13:11:26.388134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:19.992 [2024-12-06 13:11:26.388155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.992 [2024-12-06 13:11:26.391418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.992 [2024-12-06 13:11:26.391502] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:19.992 BaseBdev1 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 BaseBdev2_malloc 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 true 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 [2024-12-06 13:11:26.460477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:19.992 [2024-12-06 13:11:26.460584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.992 [2024-12-06 13:11:26.460621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:19.992 [2024-12-06 13:11:26.460640] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.992 [2024-12-06 13:11:26.463872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.992 [2024-12-06 13:11:26.463930] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:19.992 BaseBdev2 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 BaseBdev3_malloc 00:17:20.251 13:11:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 true 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 [2024-12-06 13:11:26.542959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:20.251 [2024-12-06 13:11:26.543222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.251 [2024-12-06 13:11:26.543269] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:20.251 [2024-12-06 13:11:26.543291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.251 [2024-12-06 13:11:26.546526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.251 [2024-12-06 13:11:26.546582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:20.251 BaseBdev3 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 BaseBdev4_malloc 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 true 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 [2024-12-06 13:11:26.615102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:20.251 [2024-12-06 13:11:26.615203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.251 [2024-12-06 13:11:26.615242] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:20.251 [2024-12-06 13:11:26.615263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.251 [2024-12-06 13:11:26.618436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.251 [2024-12-06 13:11:26.618512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:20.251 BaseBdev4 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 [2024-12-06 13:11:26.627259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.251 [2024-12-06 13:11:26.629886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.251 [2024-12-06 13:11:26.630151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.251 [2024-12-06 13:11:26.630289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:20.251 [2024-12-06 13:11:26.630637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:20.251 [2024-12-06 13:11:26.630668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:20.251 [2024-12-06 13:11:26.631025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:20.251 [2024-12-06 13:11:26.631262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:20.251 [2024-12-06 13:11:26.631283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:20.251 [2024-12-06 13:11:26.631593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:20.251 13:11:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.251 "name": "raid_bdev1", 00:17:20.251 "uuid": "b7a36d01-f07d-42ae-9faa-0e0b9234e07a", 00:17:20.251 "strip_size_kb": 64, 00:17:20.251 "state": "online", 00:17:20.251 "raid_level": "raid0", 00:17:20.251 "superblock": true, 00:17:20.251 "num_base_bdevs": 4, 00:17:20.251 "num_base_bdevs_discovered": 4, 00:17:20.251 "num_base_bdevs_operational": 4, 00:17:20.251 "base_bdevs_list": [ 00:17:20.251 
{ 00:17:20.251 "name": "BaseBdev1", 00:17:20.251 "uuid": "b4204ed7-866f-592e-ae04-646615898a30", 00:17:20.251 "is_configured": true, 00:17:20.251 "data_offset": 2048, 00:17:20.251 "data_size": 63488 00:17:20.251 }, 00:17:20.251 { 00:17:20.251 "name": "BaseBdev2", 00:17:20.251 "uuid": "4eb94e0b-a1d8-5766-822f-541bf69398f2", 00:17:20.251 "is_configured": true, 00:17:20.251 "data_offset": 2048, 00:17:20.251 "data_size": 63488 00:17:20.251 }, 00:17:20.251 { 00:17:20.251 "name": "BaseBdev3", 00:17:20.251 "uuid": "d96ef665-81c1-56bd-849a-9277b2be74e0", 00:17:20.251 "is_configured": true, 00:17:20.251 "data_offset": 2048, 00:17:20.251 "data_size": 63488 00:17:20.251 }, 00:17:20.251 { 00:17:20.251 "name": "BaseBdev4", 00:17:20.251 "uuid": "32042a63-7b56-5429-9708-ef70212e572f", 00:17:20.251 "is_configured": true, 00:17:20.251 "data_offset": 2048, 00:17:20.251 "data_size": 63488 00:17:20.251 } 00:17:20.251 ] 00:17:20.251 }' 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.251 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.816 13:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:20.816 13:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:20.816 [2024-12-06 13:11:27.285259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.749 13:11:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.749 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.749 13:11:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.749 "name": "raid_bdev1", 00:17:21.749 "uuid": "b7a36d01-f07d-42ae-9faa-0e0b9234e07a", 00:17:21.749 "strip_size_kb": 64, 00:17:21.750 "state": "online", 00:17:21.750 "raid_level": "raid0", 00:17:21.750 "superblock": true, 00:17:21.750 "num_base_bdevs": 4, 00:17:21.750 "num_base_bdevs_discovered": 4, 00:17:21.750 "num_base_bdevs_operational": 4, 00:17:21.750 "base_bdevs_list": [ 00:17:21.750 { 00:17:21.750 "name": "BaseBdev1", 00:17:21.750 "uuid": "b4204ed7-866f-592e-ae04-646615898a30", 00:17:21.750 "is_configured": true, 00:17:21.750 "data_offset": 2048, 00:17:21.750 "data_size": 63488 00:17:21.750 }, 00:17:21.750 { 00:17:21.750 "name": "BaseBdev2", 00:17:21.750 "uuid": "4eb94e0b-a1d8-5766-822f-541bf69398f2", 00:17:21.750 "is_configured": true, 00:17:21.750 "data_offset": 2048, 00:17:21.750 "data_size": 63488 00:17:21.750 }, 00:17:21.750 { 00:17:21.750 "name": "BaseBdev3", 00:17:21.750 "uuid": "d96ef665-81c1-56bd-849a-9277b2be74e0", 00:17:21.750 "is_configured": true, 00:17:21.750 "data_offset": 2048, 00:17:21.750 "data_size": 63488 00:17:21.750 }, 00:17:21.750 { 00:17:21.750 "name": "BaseBdev4", 00:17:21.750 "uuid": "32042a63-7b56-5429-9708-ef70212e572f", 00:17:21.750 "is_configured": true, 00:17:21.750 "data_offset": 2048, 00:17:21.750 "data_size": 63488 00:17:21.750 } 00:17:21.750 ] 00:17:21.750 }' 00:17:21.750 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.750 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.316 [2024-12-06 13:11:28.683770] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.316 [2024-12-06 13:11:28.683820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.316 { 00:17:22.316 "results": [ 00:17:22.316 { 00:17:22.316 "job": "raid_bdev1", 00:17:22.316 "core_mask": "0x1", 00:17:22.316 "workload": "randrw", 00:17:22.316 "percentage": 50, 00:17:22.316 "status": "finished", 00:17:22.316 "queue_depth": 1, 00:17:22.316 "io_size": 131072, 00:17:22.316 "runtime": 1.395531, 00:17:22.316 "iops": 9407.171893709276, 00:17:22.316 "mibps": 1175.8964867136594, 00:17:22.316 "io_failed": 1, 00:17:22.316 "io_timeout": 0, 00:17:22.316 "avg_latency_us": 149.45074872419832, 00:17:22.316 "min_latency_us": 45.38181818181818, 00:17:22.316 "max_latency_us": 1869.2654545454545 00:17:22.316 } 00:17:22.316 ], 00:17:22.316 "core_count": 1 00:17:22.316 } 00:17:22.316 [2024-12-06 13:11:28.687345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.316 [2024-12-06 13:11:28.687440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.316 [2024-12-06 13:11:28.687538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.316 [2024-12-06 13:11:28.687561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71372 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71372 ']' 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71372 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71372 00:17:22.316 killing process with pid 71372 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71372' 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71372 00:17:22.316 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71372 00:17:22.316 [2024-12-06 13:11:28.727789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.574 [2024-12-06 13:11:29.045877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.951 13:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LoiOtgKi0M 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:17:23.952 ************************************ 00:17:23.952 END TEST raid_read_error_test 00:17:23.952 ************************************ 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:17:23.952 00:17:23.952 real 0m5.078s 
00:17:23.952 user 0m6.168s 00:17:23.952 sys 0m0.681s 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.952 13:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.952 13:11:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:17:23.952 13:11:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:23.952 13:11:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.952 13:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.952 ************************************ 00:17:23.952 START TEST raid_write_error_test 00:17:23.952 ************************************ 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Lo4Fzapqfv 00:17:23.952 13:11:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71523 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71523 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71523 ']' 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.952 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.952 [2024-12-06 13:11:30.420987] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:17:23.952 [2024-12-06 13:11:30.421322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71523 ] 00:17:24.210 [2024-12-06 13:11:30.597063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.468 [2024-12-06 13:11:30.744949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.468 [2024-12-06 13:11:30.971254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.468 [2024-12-06 13:11:30.971675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.036 BaseBdev1_malloc 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.036 true 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.036 [2024-12-06 13:11:31.429055] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:25.036 [2024-12-06 13:11:31.429146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.036 [2024-12-06 13:11:31.429182] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:25.036 [2024-12-06 13:11:31.429202] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.036 [2024-12-06 13:11:31.432276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.036 [2024-12-06 13:11:31.432507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:25.036 BaseBdev1 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.036 BaseBdev2_malloc 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:25.036 13:11:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.036 true 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.036 [2024-12-06 13:11:31.501860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:25.036 [2024-12-06 13:11:31.502080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.036 [2024-12-06 13:11:31.502248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:25.036 [2024-12-06 13:11:31.502373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.036 [2024-12-06 13:11:31.505512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.036 [2024-12-06 13:11:31.505691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:25.036 BaseBdev2 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.036 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:25.295 BaseBdev3_malloc 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.295 true 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.295 [2024-12-06 13:11:31.592585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:25.295 [2024-12-06 13:11:31.592673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.295 [2024-12-06 13:11:31.592708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:25.295 [2024-12-06 13:11:31.592729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.295 [2024-12-06 13:11:31.595892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.295 [2024-12-06 13:11:31.595959] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:25.295 BaseBdev3 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.295 BaseBdev4_malloc 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.295 true 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.295 [2024-12-06 13:11:31.661238] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:25.295 [2024-12-06 13:11:31.661316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.295 [2024-12-06 13:11:31.661348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:25.295 [2024-12-06 13:11:31.661367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.295 [2024-12-06 13:11:31.664393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.295 [2024-12-06 13:11:31.664466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:25.295 BaseBdev4 
00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.295 [2024-12-06 13:11:31.669441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.295 [2024-12-06 13:11:31.672164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.295 [2024-12-06 13:11:31.672285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.295 [2024-12-06 13:11:31.672393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.295 [2024-12-06 13:11:31.672759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:25.295 [2024-12-06 13:11:31.672794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:25.295 [2024-12-06 13:11:31.673151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:25.295 [2024-12-06 13:11:31.673393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:25.295 [2024-12-06 13:11:31.673412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:25.295 [2024-12-06 13:11:31.673726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.295 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.296 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.296 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.296 "name": "raid_bdev1", 00:17:25.296 "uuid": "5e956e58-eb6f-4587-9efa-ed0bfe0514ab", 00:17:25.296 "strip_size_kb": 64, 00:17:25.296 "state": "online", 00:17:25.296 "raid_level": "raid0", 00:17:25.296 "superblock": true, 00:17:25.296 "num_base_bdevs": 4, 00:17:25.296 "num_base_bdevs_discovered": 4, 00:17:25.296 
"num_base_bdevs_operational": 4, 00:17:25.296 "base_bdevs_list": [ 00:17:25.296 { 00:17:25.296 "name": "BaseBdev1", 00:17:25.296 "uuid": "6607d187-dcba-527f-9242-02b7e57ad9a0", 00:17:25.296 "is_configured": true, 00:17:25.296 "data_offset": 2048, 00:17:25.296 "data_size": 63488 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "name": "BaseBdev2", 00:17:25.296 "uuid": "d34518b1-04cc-524c-b02b-27f4fb774213", 00:17:25.296 "is_configured": true, 00:17:25.296 "data_offset": 2048, 00:17:25.296 "data_size": 63488 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "name": "BaseBdev3", 00:17:25.296 "uuid": "eaaa3a5c-46ac-5d72-b813-9335a587e9bd", 00:17:25.296 "is_configured": true, 00:17:25.296 "data_offset": 2048, 00:17:25.296 "data_size": 63488 00:17:25.296 }, 00:17:25.296 { 00:17:25.296 "name": "BaseBdev4", 00:17:25.296 "uuid": "b68feec8-1117-59d3-9df1-4e66aefc8f12", 00:17:25.296 "is_configured": true, 00:17:25.296 "data_offset": 2048, 00:17:25.296 "data_size": 63488 00:17:25.296 } 00:17:25.296 ] 00:17:25.296 }' 00:17:25.296 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.296 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.863 13:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:25.863 13:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:25.863 [2024-12-06 13:11:32.279429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.796 "name": "raid_bdev1", 00:17:26.796 "uuid": "5e956e58-eb6f-4587-9efa-ed0bfe0514ab", 00:17:26.796 "strip_size_kb": 64, 00:17:26.796 "state": "online", 00:17:26.796 "raid_level": "raid0", 00:17:26.796 "superblock": true, 00:17:26.796 "num_base_bdevs": 4, 00:17:26.796 "num_base_bdevs_discovered": 4, 00:17:26.796 "num_base_bdevs_operational": 4, 00:17:26.796 "base_bdevs_list": [ 00:17:26.796 { 00:17:26.796 "name": "BaseBdev1", 00:17:26.796 "uuid": "6607d187-dcba-527f-9242-02b7e57ad9a0", 00:17:26.796 "is_configured": true, 00:17:26.796 "data_offset": 2048, 00:17:26.796 "data_size": 63488 00:17:26.796 }, 00:17:26.796 { 00:17:26.796 "name": "BaseBdev2", 00:17:26.796 "uuid": "d34518b1-04cc-524c-b02b-27f4fb774213", 00:17:26.796 "is_configured": true, 00:17:26.796 "data_offset": 2048, 00:17:26.796 "data_size": 63488 00:17:26.796 }, 00:17:26.796 { 00:17:26.796 "name": "BaseBdev3", 00:17:26.796 "uuid": "eaaa3a5c-46ac-5d72-b813-9335a587e9bd", 00:17:26.796 "is_configured": true, 00:17:26.796 "data_offset": 2048, 00:17:26.796 "data_size": 63488 00:17:26.796 }, 00:17:26.796 { 00:17:26.796 "name": "BaseBdev4", 00:17:26.796 "uuid": "b68feec8-1117-59d3-9df1-4e66aefc8f12", 00:17:26.796 "is_configured": true, 00:17:26.796 "data_offset": 2048, 00:17:26.796 "data_size": 63488 00:17:26.796 } 00:17:26.796 ] 00:17:26.796 }' 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.796 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:27.363 [2024-12-06 13:11:33.687428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.363 [2024-12-06 13:11:33.687730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.363 [2024-12-06 13:11:33.691622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.363 [2024-12-06 13:11:33.691928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.363 [2024-12-06 13:11:33.692132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.363 [2024-12-06 13:11:33.692309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:27.363 { 00:17:27.363 "results": [ 00:17:27.363 { 00:17:27.363 "job": "raid_bdev1", 00:17:27.363 "core_mask": "0x1", 00:17:27.363 "workload": "randrw", 00:17:27.363 "percentage": 50, 00:17:27.363 "status": "finished", 00:17:27.363 "queue_depth": 1, 00:17:27.363 "io_size": 131072, 00:17:27.363 "runtime": 1.404898, 00:17:27.363 "iops": 9141.58892674059, 00:17:27.363 "mibps": 1142.6986158425736, 00:17:27.363 "io_failed": 1, 00:17:27.363 "io_timeout": 0, 00:17:27.363 "avg_latency_us": 153.71529458395858, 00:17:27.363 "min_latency_us": 43.52, 00:17:27.363 "max_latency_us": 2025.658181818182 00:17:27.363 } 00:17:27.363 ], 00:17:27.363 "core_count": 1 00:17:27.363 } 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71523 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71523 ']' 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71523 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:17:27.363 
13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71523 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71523' 00:17:27.363 killing process with pid 71523 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71523 00:17:27.363 [2024-12-06 13:11:33.732055] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.363 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71523 00:17:27.622 [2024-12-06 13:11:34.029929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Lo4Fzapqfv 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:17:29.017 ************************************ 00:17:29.017 END TEST raid_write_error_test 00:17:29.017 ************************************ 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != 
\0\.\0\0 ]] 00:17:29.017 00:17:29.017 real 0m4.890s 00:17:29.017 user 0m5.876s 00:17:29.017 sys 0m0.653s 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.017 13:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.017 13:11:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:29.017 13:11:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:17:29.017 13:11:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:29.017 13:11:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.017 13:11:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.017 ************************************ 00:17:29.017 START TEST raid_state_function_test 00:17:29.017 ************************************ 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71667 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71667' 00:17:29.017 Process raid pid: 71667 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71667 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71667 ']' 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.017 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.017 [2024-12-06 13:11:35.364249] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:17:29.017 [2024-12-06 13:11:35.364438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.276 [2024-12-06 13:11:35.543121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.276 [2024-12-06 13:11:35.693435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.534 [2024-12-06 13:11:35.920667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.534 [2024-12-06 13:11:35.920724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.104 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.104 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.105 [2024-12-06 13:11:36.415574] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.105 [2024-12-06 13:11:36.415673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.105 [2024-12-06 13:11:36.415691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.105 [2024-12-06 13:11:36.415709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.105 [2024-12-06 13:11:36.415720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:30.105 [2024-12-06 13:11:36.415735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.105 [2024-12-06 13:11:36.415745] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.105 [2024-12-06 13:11:36.415760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.105 "name": "Existed_Raid", 00:17:30.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.105 "strip_size_kb": 64, 00:17:30.105 "state": "configuring", 00:17:30.105 "raid_level": "concat", 00:17:30.105 "superblock": false, 00:17:30.105 "num_base_bdevs": 4, 00:17:30.105 "num_base_bdevs_discovered": 0, 00:17:30.105 "num_base_bdevs_operational": 4, 00:17:30.105 "base_bdevs_list": [ 00:17:30.105 { 00:17:30.105 "name": "BaseBdev1", 00:17:30.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.105 "is_configured": false, 00:17:30.105 "data_offset": 0, 00:17:30.105 "data_size": 0 00:17:30.105 }, 00:17:30.105 { 00:17:30.105 "name": "BaseBdev2", 00:17:30.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.105 "is_configured": false, 00:17:30.105 "data_offset": 0, 00:17:30.105 "data_size": 0 00:17:30.105 }, 00:17:30.105 { 00:17:30.105 "name": "BaseBdev3", 00:17:30.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.105 "is_configured": false, 00:17:30.105 "data_offset": 0, 00:17:30.105 "data_size": 0 00:17:30.105 }, 00:17:30.105 { 00:17:30.105 "name": "BaseBdev4", 00:17:30.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.105 "is_configured": false, 00:17:30.105 "data_offset": 0, 00:17:30.105 "data_size": 0 00:17:30.105 } 00:17:30.105 ] 00:17:30.105 }' 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.105 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.673 [2024-12-06 13:11:36.959654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.673 [2024-12-06 13:11:36.959851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.673 [2024-12-06 13:11:36.967647] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.673 [2024-12-06 13:11:36.967708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.673 [2024-12-06 13:11:36.967724] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.673 [2024-12-06 13:11:36.967742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.673 [2024-12-06 13:11:36.967752] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.673 [2024-12-06 13:11:36.967767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.673 [2024-12-06 13:11:36.967777] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.673 [2024-12-06 13:11:36.967791] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.673 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.673 [2024-12-06 13:11:37.016579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.673 BaseBdev1 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.673 [ 00:17:30.673 { 00:17:30.673 "name": "BaseBdev1", 00:17:30.673 "aliases": [ 00:17:30.673 "06b5fb3b-5f47-4855-abbe-0265aca885b6" 00:17:30.673 ], 00:17:30.673 "product_name": "Malloc disk", 00:17:30.673 "block_size": 512, 00:17:30.673 "num_blocks": 65536, 00:17:30.673 "uuid": "06b5fb3b-5f47-4855-abbe-0265aca885b6", 00:17:30.673 "assigned_rate_limits": { 00:17:30.673 "rw_ios_per_sec": 0, 00:17:30.673 "rw_mbytes_per_sec": 0, 00:17:30.673 "r_mbytes_per_sec": 0, 00:17:30.673 "w_mbytes_per_sec": 0 00:17:30.673 }, 00:17:30.673 "claimed": true, 00:17:30.673 "claim_type": "exclusive_write", 00:17:30.673 "zoned": false, 00:17:30.673 "supported_io_types": { 00:17:30.673 "read": true, 00:17:30.673 "write": true, 00:17:30.673 "unmap": true, 00:17:30.673 "flush": true, 00:17:30.673 "reset": true, 00:17:30.673 "nvme_admin": false, 00:17:30.673 "nvme_io": false, 00:17:30.673 "nvme_io_md": false, 00:17:30.673 "write_zeroes": true, 00:17:30.673 "zcopy": true, 00:17:30.673 "get_zone_info": false, 00:17:30.673 "zone_management": false, 00:17:30.673 "zone_append": false, 00:17:30.673 "compare": false, 00:17:30.673 "compare_and_write": false, 00:17:30.673 "abort": true, 00:17:30.673 "seek_hole": false, 00:17:30.673 "seek_data": false, 00:17:30.673 "copy": true, 00:17:30.673 "nvme_iov_md": false 00:17:30.673 }, 00:17:30.673 "memory_domains": [ 00:17:30.673 { 00:17:30.673 "dma_device_id": "system", 00:17:30.673 "dma_device_type": 1 00:17:30.673 }, 00:17:30.673 { 00:17:30.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.673 "dma_device_type": 2 00:17:30.673 } 00:17:30.673 ], 00:17:30.673 "driver_specific": {} 00:17:30.673 } 00:17:30.673 ] 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.673 "name": "Existed_Raid", 
00:17:30.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.673 "strip_size_kb": 64, 00:17:30.673 "state": "configuring", 00:17:30.673 "raid_level": "concat", 00:17:30.673 "superblock": false, 00:17:30.673 "num_base_bdevs": 4, 00:17:30.673 "num_base_bdevs_discovered": 1, 00:17:30.673 "num_base_bdevs_operational": 4, 00:17:30.673 "base_bdevs_list": [ 00:17:30.673 { 00:17:30.673 "name": "BaseBdev1", 00:17:30.673 "uuid": "06b5fb3b-5f47-4855-abbe-0265aca885b6", 00:17:30.673 "is_configured": true, 00:17:30.673 "data_offset": 0, 00:17:30.673 "data_size": 65536 00:17:30.673 }, 00:17:30.673 { 00:17:30.673 "name": "BaseBdev2", 00:17:30.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.673 "is_configured": false, 00:17:30.673 "data_offset": 0, 00:17:30.673 "data_size": 0 00:17:30.673 }, 00:17:30.673 { 00:17:30.673 "name": "BaseBdev3", 00:17:30.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.673 "is_configured": false, 00:17:30.673 "data_offset": 0, 00:17:30.673 "data_size": 0 00:17:30.673 }, 00:17:30.673 { 00:17:30.673 "name": "BaseBdev4", 00:17:30.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.673 "is_configured": false, 00:17:30.673 "data_offset": 0, 00:17:30.673 "data_size": 0 00:17:30.673 } 00:17:30.673 ] 00:17:30.673 }' 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.673 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.241 [2024-12-06 13:11:37.556800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.241 [2024-12-06 13:11:37.556878] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.241 [2024-12-06 13:11:37.564843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.241 [2024-12-06 13:11:37.567536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.241 [2024-12-06 13:11:37.567602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.241 [2024-12-06 13:11:37.567620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:31.241 [2024-12-06 13:11:37.567639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:31.241 [2024-12-06 13:11:37.567652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:31.241 [2024-12-06 13:11:37.567667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.241 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.241 "name": "Existed_Raid", 00:17:31.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.241 "strip_size_kb": 64, 00:17:31.241 "state": "configuring", 00:17:31.241 "raid_level": "concat", 00:17:31.242 "superblock": false, 00:17:31.242 "num_base_bdevs": 4, 00:17:31.242 
"num_base_bdevs_discovered": 1, 00:17:31.242 "num_base_bdevs_operational": 4, 00:17:31.242 "base_bdevs_list": [ 00:17:31.242 { 00:17:31.242 "name": "BaseBdev1", 00:17:31.242 "uuid": "06b5fb3b-5f47-4855-abbe-0265aca885b6", 00:17:31.242 "is_configured": true, 00:17:31.242 "data_offset": 0, 00:17:31.242 "data_size": 65536 00:17:31.242 }, 00:17:31.242 { 00:17:31.242 "name": "BaseBdev2", 00:17:31.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.242 "is_configured": false, 00:17:31.242 "data_offset": 0, 00:17:31.242 "data_size": 0 00:17:31.242 }, 00:17:31.242 { 00:17:31.242 "name": "BaseBdev3", 00:17:31.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.242 "is_configured": false, 00:17:31.242 "data_offset": 0, 00:17:31.242 "data_size": 0 00:17:31.242 }, 00:17:31.242 { 00:17:31.242 "name": "BaseBdev4", 00:17:31.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.242 "is_configured": false, 00:17:31.242 "data_offset": 0, 00:17:31.242 "data_size": 0 00:17:31.242 } 00:17:31.242 ] 00:17:31.242 }' 00:17:31.242 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.242 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.809 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:31.809 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.809 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.809 [2024-12-06 13:11:38.135047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.809 BaseBdev2 00:17:31.809 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.809 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:31.809 13:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:31.809 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.810 [ 00:17:31.810 { 00:17:31.810 "name": "BaseBdev2", 00:17:31.810 "aliases": [ 00:17:31.810 "d43a0c8a-9c75-4a65-8815-4db46997331b" 00:17:31.810 ], 00:17:31.810 "product_name": "Malloc disk", 00:17:31.810 "block_size": 512, 00:17:31.810 "num_blocks": 65536, 00:17:31.810 "uuid": "d43a0c8a-9c75-4a65-8815-4db46997331b", 00:17:31.810 "assigned_rate_limits": { 00:17:31.810 "rw_ios_per_sec": 0, 00:17:31.810 "rw_mbytes_per_sec": 0, 00:17:31.810 "r_mbytes_per_sec": 0, 00:17:31.810 "w_mbytes_per_sec": 0 00:17:31.810 }, 00:17:31.810 "claimed": true, 00:17:31.810 "claim_type": "exclusive_write", 00:17:31.810 "zoned": false, 00:17:31.810 "supported_io_types": { 
00:17:31.810 "read": true, 00:17:31.810 "write": true, 00:17:31.810 "unmap": true, 00:17:31.810 "flush": true, 00:17:31.810 "reset": true, 00:17:31.810 "nvme_admin": false, 00:17:31.810 "nvme_io": false, 00:17:31.810 "nvme_io_md": false, 00:17:31.810 "write_zeroes": true, 00:17:31.810 "zcopy": true, 00:17:31.810 "get_zone_info": false, 00:17:31.810 "zone_management": false, 00:17:31.810 "zone_append": false, 00:17:31.810 "compare": false, 00:17:31.810 "compare_and_write": false, 00:17:31.810 "abort": true, 00:17:31.810 "seek_hole": false, 00:17:31.810 "seek_data": false, 00:17:31.810 "copy": true, 00:17:31.810 "nvme_iov_md": false 00:17:31.810 }, 00:17:31.810 "memory_domains": [ 00:17:31.810 { 00:17:31.810 "dma_device_id": "system", 00:17:31.810 "dma_device_type": 1 00:17:31.810 }, 00:17:31.810 { 00:17:31.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.810 "dma_device_type": 2 00:17:31.810 } 00:17:31.810 ], 00:17:31.810 "driver_specific": {} 00:17:31.810 } 00:17:31.810 ] 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.810 "name": "Existed_Raid", 00:17:31.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.810 "strip_size_kb": 64, 00:17:31.810 "state": "configuring", 00:17:31.810 "raid_level": "concat", 00:17:31.810 "superblock": false, 00:17:31.810 "num_base_bdevs": 4, 00:17:31.810 "num_base_bdevs_discovered": 2, 00:17:31.810 "num_base_bdevs_operational": 4, 00:17:31.810 "base_bdevs_list": [ 00:17:31.810 { 00:17:31.810 "name": "BaseBdev1", 00:17:31.810 "uuid": "06b5fb3b-5f47-4855-abbe-0265aca885b6", 00:17:31.810 "is_configured": true, 00:17:31.810 "data_offset": 0, 00:17:31.810 "data_size": 65536 00:17:31.810 }, 00:17:31.810 { 00:17:31.810 "name": "BaseBdev2", 00:17:31.810 "uuid": "d43a0c8a-9c75-4a65-8815-4db46997331b", 00:17:31.810 
"is_configured": true, 00:17:31.810 "data_offset": 0, 00:17:31.810 "data_size": 65536 00:17:31.810 }, 00:17:31.810 { 00:17:31.810 "name": "BaseBdev3", 00:17:31.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.810 "is_configured": false, 00:17:31.810 "data_offset": 0, 00:17:31.810 "data_size": 0 00:17:31.810 }, 00:17:31.810 { 00:17:31.810 "name": "BaseBdev4", 00:17:31.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.810 "is_configured": false, 00:17:31.810 "data_offset": 0, 00:17:31.810 "data_size": 0 00:17:31.810 } 00:17:31.810 ] 00:17:31.810 }' 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.810 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 [2024-12-06 13:11:38.779014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.377 BaseBdev3 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 [ 00:17:32.377 { 00:17:32.377 "name": "BaseBdev3", 00:17:32.377 "aliases": [ 00:17:32.377 "bceed91b-d305-4e46-afe9-8e7e451208de" 00:17:32.377 ], 00:17:32.377 "product_name": "Malloc disk", 00:17:32.377 "block_size": 512, 00:17:32.377 "num_blocks": 65536, 00:17:32.377 "uuid": "bceed91b-d305-4e46-afe9-8e7e451208de", 00:17:32.377 "assigned_rate_limits": { 00:17:32.377 "rw_ios_per_sec": 0, 00:17:32.377 "rw_mbytes_per_sec": 0, 00:17:32.377 "r_mbytes_per_sec": 0, 00:17:32.377 "w_mbytes_per_sec": 0 00:17:32.377 }, 00:17:32.377 "claimed": true, 00:17:32.377 "claim_type": "exclusive_write", 00:17:32.377 "zoned": false, 00:17:32.377 "supported_io_types": { 00:17:32.377 "read": true, 00:17:32.377 "write": true, 00:17:32.377 "unmap": true, 00:17:32.377 "flush": true, 00:17:32.377 "reset": true, 00:17:32.377 "nvme_admin": false, 00:17:32.377 "nvme_io": false, 00:17:32.377 "nvme_io_md": false, 00:17:32.377 "write_zeroes": true, 00:17:32.377 "zcopy": true, 00:17:32.377 "get_zone_info": false, 00:17:32.377 "zone_management": false, 00:17:32.377 "zone_append": false, 00:17:32.377 "compare": false, 00:17:32.377 "compare_and_write": false, 
00:17:32.377 "abort": true, 00:17:32.377 "seek_hole": false, 00:17:32.377 "seek_data": false, 00:17:32.377 "copy": true, 00:17:32.377 "nvme_iov_md": false 00:17:32.377 }, 00:17:32.377 "memory_domains": [ 00:17:32.377 { 00:17:32.377 "dma_device_id": "system", 00:17:32.377 "dma_device_type": 1 00:17:32.377 }, 00:17:32.377 { 00:17:32.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.377 "dma_device_type": 2 00:17:32.377 } 00:17:32.377 ], 00:17:32.377 "driver_specific": {} 00:17:32.377 } 00:17:32.377 ] 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.377 "name": "Existed_Raid", 00:17:32.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.377 "strip_size_kb": 64, 00:17:32.377 "state": "configuring", 00:17:32.377 "raid_level": "concat", 00:17:32.377 "superblock": false, 00:17:32.377 "num_base_bdevs": 4, 00:17:32.377 "num_base_bdevs_discovered": 3, 00:17:32.377 "num_base_bdevs_operational": 4, 00:17:32.377 "base_bdevs_list": [ 00:17:32.377 { 00:17:32.377 "name": "BaseBdev1", 00:17:32.377 "uuid": "06b5fb3b-5f47-4855-abbe-0265aca885b6", 00:17:32.377 "is_configured": true, 00:17:32.377 "data_offset": 0, 00:17:32.377 "data_size": 65536 00:17:32.377 }, 00:17:32.377 { 00:17:32.377 "name": "BaseBdev2", 00:17:32.377 "uuid": "d43a0c8a-9c75-4a65-8815-4db46997331b", 00:17:32.377 "is_configured": true, 00:17:32.377 "data_offset": 0, 00:17:32.377 "data_size": 65536 00:17:32.377 }, 00:17:32.377 { 00:17:32.377 "name": "BaseBdev3", 00:17:32.377 "uuid": "bceed91b-d305-4e46-afe9-8e7e451208de", 00:17:32.377 "is_configured": true, 00:17:32.377 "data_offset": 0, 00:17:32.377 "data_size": 65536 00:17:32.377 }, 00:17:32.377 { 00:17:32.377 "name": "BaseBdev4", 00:17:32.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.377 "is_configured": false, 
00:17:32.377 "data_offset": 0, 00:17:32.377 "data_size": 0 00:17:32.377 } 00:17:32.377 ] 00:17:32.377 }' 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.377 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.946 [2024-12-06 13:11:39.369351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:32.946 [2024-12-06 13:11:39.369739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:32.946 [2024-12-06 13:11:39.369767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:32.946 [2024-12-06 13:11:39.370162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:32.946 [2024-12-06 13:11:39.370414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:32.946 [2024-12-06 13:11:39.370437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:32.946 [2024-12-06 13:11:39.370821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.946 BaseBdev4 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.946 [ 00:17:32.946 { 00:17:32.946 "name": "BaseBdev4", 00:17:32.946 "aliases": [ 00:17:32.946 "09cd20aa-9524-42c2-aeef-ac30d0274c27" 00:17:32.946 ], 00:17:32.946 "product_name": "Malloc disk", 00:17:32.946 "block_size": 512, 00:17:32.946 "num_blocks": 65536, 00:17:32.946 "uuid": "09cd20aa-9524-42c2-aeef-ac30d0274c27", 00:17:32.946 "assigned_rate_limits": { 00:17:32.946 "rw_ios_per_sec": 0, 00:17:32.946 "rw_mbytes_per_sec": 0, 00:17:32.946 "r_mbytes_per_sec": 0, 00:17:32.946 "w_mbytes_per_sec": 0 00:17:32.946 }, 00:17:32.946 "claimed": true, 00:17:32.946 "claim_type": "exclusive_write", 00:17:32.946 "zoned": false, 00:17:32.946 "supported_io_types": { 00:17:32.946 "read": true, 00:17:32.946 "write": true, 00:17:32.946 "unmap": true, 00:17:32.946 "flush": true, 00:17:32.946 "reset": true, 00:17:32.946 
"nvme_admin": false, 00:17:32.946 "nvme_io": false, 00:17:32.946 "nvme_io_md": false, 00:17:32.946 "write_zeroes": true, 00:17:32.946 "zcopy": true, 00:17:32.946 "get_zone_info": false, 00:17:32.946 "zone_management": false, 00:17:32.946 "zone_append": false, 00:17:32.946 "compare": false, 00:17:32.946 "compare_and_write": false, 00:17:32.946 "abort": true, 00:17:32.946 "seek_hole": false, 00:17:32.946 "seek_data": false, 00:17:32.946 "copy": true, 00:17:32.946 "nvme_iov_md": false 00:17:32.946 }, 00:17:32.946 "memory_domains": [ 00:17:32.946 { 00:17:32.946 "dma_device_id": "system", 00:17:32.946 "dma_device_type": 1 00:17:32.946 }, 00:17:32.946 { 00:17:32.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.946 "dma_device_type": 2 00:17:32.946 } 00:17:32.946 ], 00:17:32.946 "driver_specific": {} 00:17:32.946 } 00:17:32.946 ] 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.946 
13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.946 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.947 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.947 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.947 "name": "Existed_Raid", 00:17:32.947 "uuid": "9f17e837-2317-4cbe-a71c-acc598413727", 00:17:32.947 "strip_size_kb": 64, 00:17:32.947 "state": "online", 00:17:32.947 "raid_level": "concat", 00:17:32.947 "superblock": false, 00:17:32.947 "num_base_bdevs": 4, 00:17:32.947 "num_base_bdevs_discovered": 4, 00:17:32.947 "num_base_bdevs_operational": 4, 00:17:32.947 "base_bdevs_list": [ 00:17:32.947 { 00:17:32.947 "name": "BaseBdev1", 00:17:32.947 "uuid": "06b5fb3b-5f47-4855-abbe-0265aca885b6", 00:17:32.947 "is_configured": true, 00:17:32.947 "data_offset": 0, 00:17:32.947 "data_size": 65536 00:17:32.947 }, 00:17:32.947 { 00:17:32.947 "name": "BaseBdev2", 00:17:32.947 "uuid": "d43a0c8a-9c75-4a65-8815-4db46997331b", 00:17:32.947 "is_configured": true, 00:17:32.947 "data_offset": 0, 00:17:32.947 "data_size": 65536 00:17:32.947 }, 00:17:32.947 { 00:17:32.947 "name": "BaseBdev3", 
00:17:32.947 "uuid": "bceed91b-d305-4e46-afe9-8e7e451208de", 00:17:32.947 "is_configured": true, 00:17:32.947 "data_offset": 0, 00:17:32.947 "data_size": 65536 00:17:32.947 }, 00:17:32.947 { 00:17:32.947 "name": "BaseBdev4", 00:17:32.947 "uuid": "09cd20aa-9524-42c2-aeef-ac30d0274c27", 00:17:32.947 "is_configured": true, 00:17:32.947 "data_offset": 0, 00:17:32.947 "data_size": 65536 00:17:32.947 } 00:17:32.947 ] 00:17:32.947 }' 00:17:32.947 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.947 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.512 [2024-12-06 13:11:39.934054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.512 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.512 
13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.512 "name": "Existed_Raid", 00:17:33.512 "aliases": [ 00:17:33.512 "9f17e837-2317-4cbe-a71c-acc598413727" 00:17:33.512 ], 00:17:33.512 "product_name": "Raid Volume", 00:17:33.512 "block_size": 512, 00:17:33.512 "num_blocks": 262144, 00:17:33.512 "uuid": "9f17e837-2317-4cbe-a71c-acc598413727", 00:17:33.512 "assigned_rate_limits": { 00:17:33.512 "rw_ios_per_sec": 0, 00:17:33.512 "rw_mbytes_per_sec": 0, 00:17:33.512 "r_mbytes_per_sec": 0, 00:17:33.512 "w_mbytes_per_sec": 0 00:17:33.512 }, 00:17:33.512 "claimed": false, 00:17:33.512 "zoned": false, 00:17:33.512 "supported_io_types": { 00:17:33.512 "read": true, 00:17:33.512 "write": true, 00:17:33.512 "unmap": true, 00:17:33.512 "flush": true, 00:17:33.512 "reset": true, 00:17:33.512 "nvme_admin": false, 00:17:33.512 "nvme_io": false, 00:17:33.512 "nvme_io_md": false, 00:17:33.512 "write_zeroes": true, 00:17:33.512 "zcopy": false, 00:17:33.512 "get_zone_info": false, 00:17:33.512 "zone_management": false, 00:17:33.512 "zone_append": false, 00:17:33.512 "compare": false, 00:17:33.512 "compare_and_write": false, 00:17:33.512 "abort": false, 00:17:33.512 "seek_hole": false, 00:17:33.512 "seek_data": false, 00:17:33.512 "copy": false, 00:17:33.512 "nvme_iov_md": false 00:17:33.512 }, 00:17:33.512 "memory_domains": [ 00:17:33.512 { 00:17:33.512 "dma_device_id": "system", 00:17:33.512 "dma_device_type": 1 00:17:33.512 }, 00:17:33.512 { 00:17:33.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.512 "dma_device_type": 2 00:17:33.512 }, 00:17:33.512 { 00:17:33.512 "dma_device_id": "system", 00:17:33.512 "dma_device_type": 1 00:17:33.512 }, 00:17:33.512 { 00:17:33.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.512 "dma_device_type": 2 00:17:33.512 }, 00:17:33.512 { 00:17:33.512 "dma_device_id": "system", 00:17:33.512 "dma_device_type": 1 00:17:33.512 }, 00:17:33.512 { 00:17:33.512 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:33.512 "dma_device_type": 2 00:17:33.512 }, 00:17:33.512 { 00:17:33.512 "dma_device_id": "system", 00:17:33.512 "dma_device_type": 1 00:17:33.512 }, 00:17:33.512 { 00:17:33.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.512 "dma_device_type": 2 00:17:33.512 } 00:17:33.512 ], 00:17:33.512 "driver_specific": { 00:17:33.512 "raid": { 00:17:33.512 "uuid": "9f17e837-2317-4cbe-a71c-acc598413727", 00:17:33.512 "strip_size_kb": 64, 00:17:33.512 "state": "online", 00:17:33.512 "raid_level": "concat", 00:17:33.512 "superblock": false, 00:17:33.512 "num_base_bdevs": 4, 00:17:33.512 "num_base_bdevs_discovered": 4, 00:17:33.512 "num_base_bdevs_operational": 4, 00:17:33.512 "base_bdevs_list": [ 00:17:33.512 { 00:17:33.512 "name": "BaseBdev1", 00:17:33.513 "uuid": "06b5fb3b-5f47-4855-abbe-0265aca885b6", 00:17:33.513 "is_configured": true, 00:17:33.513 "data_offset": 0, 00:17:33.513 "data_size": 65536 00:17:33.513 }, 00:17:33.513 { 00:17:33.513 "name": "BaseBdev2", 00:17:33.513 "uuid": "d43a0c8a-9c75-4a65-8815-4db46997331b", 00:17:33.513 "is_configured": true, 00:17:33.513 "data_offset": 0, 00:17:33.513 "data_size": 65536 00:17:33.513 }, 00:17:33.513 { 00:17:33.513 "name": "BaseBdev3", 00:17:33.513 "uuid": "bceed91b-d305-4e46-afe9-8e7e451208de", 00:17:33.513 "is_configured": true, 00:17:33.513 "data_offset": 0, 00:17:33.513 "data_size": 65536 00:17:33.513 }, 00:17:33.513 { 00:17:33.513 "name": "BaseBdev4", 00:17:33.513 "uuid": "09cd20aa-9524-42c2-aeef-ac30d0274c27", 00:17:33.513 "is_configured": true, 00:17:33.513 "data_offset": 0, 00:17:33.513 "data_size": 65536 00:17:33.513 } 00:17:33.513 ] 00:17:33.513 } 00:17:33.513 } 00:17:33.513 }' 00:17:33.513 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.513 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:33.513 BaseBdev2 
00:17:33.513 BaseBdev3 00:17:33.513 BaseBdev4' 00:17:33.513 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.770 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.770 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.771 13:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.771 13:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.771 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.030 [2024-12-06 13:11:40.297786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.030 [2024-12-06 13:11:40.297831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.030 [2024-12-06 13:11:40.297907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.030 "name": "Existed_Raid", 00:17:34.030 "uuid": "9f17e837-2317-4cbe-a71c-acc598413727", 00:17:34.030 "strip_size_kb": 64, 00:17:34.030 "state": "offline", 00:17:34.030 "raid_level": "concat", 00:17:34.030 "superblock": false, 00:17:34.030 "num_base_bdevs": 4, 00:17:34.030 "num_base_bdevs_discovered": 3, 00:17:34.030 "num_base_bdevs_operational": 3, 00:17:34.030 "base_bdevs_list": [ 00:17:34.030 { 00:17:34.030 "name": null, 00:17:34.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.030 "is_configured": false, 00:17:34.030 "data_offset": 0, 00:17:34.030 "data_size": 65536 00:17:34.030 }, 00:17:34.030 { 00:17:34.030 "name": "BaseBdev2", 00:17:34.030 "uuid": "d43a0c8a-9c75-4a65-8815-4db46997331b", 00:17:34.030 "is_configured": 
true, 00:17:34.030 "data_offset": 0, 00:17:34.030 "data_size": 65536 00:17:34.030 }, 00:17:34.030 { 00:17:34.030 "name": "BaseBdev3", 00:17:34.030 "uuid": "bceed91b-d305-4e46-afe9-8e7e451208de", 00:17:34.030 "is_configured": true, 00:17:34.030 "data_offset": 0, 00:17:34.030 "data_size": 65536 00:17:34.030 }, 00:17:34.030 { 00:17:34.030 "name": "BaseBdev4", 00:17:34.030 "uuid": "09cd20aa-9524-42c2-aeef-ac30d0274c27", 00:17:34.030 "is_configured": true, 00:17:34.030 "data_offset": 0, 00:17:34.030 "data_size": 65536 00:17:34.030 } 00:17:34.030 ] 00:17:34.030 }' 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.030 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:34.596 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.596 [2024-12-06 13:11:40.937335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:34.596 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.596 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.596 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.597 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.597 [2024-12-06 13:11:41.106342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.854 13:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.854 [2024-12-06 13:11:41.267442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:34.854 [2024-12-06 13:11:41.267673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.854 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.112 BaseBdev2 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.112 [ 00:17:35.112 { 00:17:35.112 "name": "BaseBdev2", 00:17:35.112 "aliases": [ 00:17:35.112 "e3a56bd2-b28f-46e2-a764-295e321ca85d" 00:17:35.112 ], 00:17:35.112 "product_name": "Malloc disk", 00:17:35.112 "block_size": 512, 00:17:35.112 "num_blocks": 65536, 00:17:35.112 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:35.112 "assigned_rate_limits": { 00:17:35.112 "rw_ios_per_sec": 0, 00:17:35.112 "rw_mbytes_per_sec": 0, 00:17:35.112 "r_mbytes_per_sec": 0, 00:17:35.112 "w_mbytes_per_sec": 0 00:17:35.112 }, 00:17:35.112 "claimed": false, 00:17:35.112 "zoned": false, 00:17:35.112 "supported_io_types": { 00:17:35.112 "read": true, 00:17:35.112 "write": true, 00:17:35.112 "unmap": true, 00:17:35.112 "flush": true, 00:17:35.112 "reset": true, 00:17:35.112 "nvme_admin": false, 00:17:35.112 "nvme_io": false, 00:17:35.112 "nvme_io_md": false, 00:17:35.112 "write_zeroes": true, 00:17:35.112 "zcopy": true, 00:17:35.112 "get_zone_info": false, 00:17:35.112 "zone_management": false, 00:17:35.112 "zone_append": false, 00:17:35.112 "compare": false, 00:17:35.112 "compare_and_write": false, 00:17:35.112 "abort": true, 00:17:35.112 "seek_hole": false, 00:17:35.112 "seek_data": false, 
00:17:35.112 "copy": true, 00:17:35.112 "nvme_iov_md": false 00:17:35.112 }, 00:17:35.112 "memory_domains": [ 00:17:35.112 { 00:17:35.112 "dma_device_id": "system", 00:17:35.112 "dma_device_type": 1 00:17:35.112 }, 00:17:35.112 { 00:17:35.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.112 "dma_device_type": 2 00:17:35.112 } 00:17:35.112 ], 00:17:35.112 "driver_specific": {} 00:17:35.112 } 00:17:35.112 ] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.112 BaseBdev3 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:35.112 
13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.112 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.112 [ 00:17:35.112 { 00:17:35.112 "name": "BaseBdev3", 00:17:35.112 "aliases": [ 00:17:35.112 "10dd0d67-a7e1-49c6-b214-b89976928eec" 00:17:35.112 ], 00:17:35.112 "product_name": "Malloc disk", 00:17:35.112 "block_size": 512, 00:17:35.112 "num_blocks": 65536, 00:17:35.112 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:35.112 "assigned_rate_limits": { 00:17:35.112 "rw_ios_per_sec": 0, 00:17:35.112 "rw_mbytes_per_sec": 0, 00:17:35.112 "r_mbytes_per_sec": 0, 00:17:35.112 "w_mbytes_per_sec": 0 00:17:35.112 }, 00:17:35.112 "claimed": false, 00:17:35.112 "zoned": false, 00:17:35.112 "supported_io_types": { 00:17:35.112 "read": true, 00:17:35.112 "write": true, 00:17:35.112 "unmap": true, 00:17:35.112 "flush": true, 00:17:35.112 "reset": true, 00:17:35.112 "nvme_admin": false, 00:17:35.112 "nvme_io": false, 00:17:35.112 "nvme_io_md": false, 00:17:35.112 "write_zeroes": true, 00:17:35.112 "zcopy": true, 00:17:35.112 "get_zone_info": false, 00:17:35.112 "zone_management": false, 00:17:35.112 "zone_append": false, 00:17:35.112 "compare": false, 00:17:35.112 "compare_and_write": false, 00:17:35.112 "abort": true, 00:17:35.112 "seek_hole": false, 00:17:35.112 "seek_data": false, 00:17:35.112 
"copy": true, 00:17:35.112 "nvme_iov_md": false 00:17:35.112 }, 00:17:35.112 "memory_domains": [ 00:17:35.112 { 00:17:35.112 "dma_device_id": "system", 00:17:35.112 "dma_device_type": 1 00:17:35.112 }, 00:17:35.112 { 00:17:35.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.113 "dma_device_type": 2 00:17:35.113 } 00:17:35.113 ], 00:17:35.113 "driver_specific": {} 00:17:35.113 } 00:17:35.113 ] 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.113 BaseBdev4 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:35.113 13:11:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.113 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.378 [ 00:17:35.379 { 00:17:35.379 "name": "BaseBdev4", 00:17:35.379 "aliases": [ 00:17:35.379 "68681dfb-4106-4cee-9f01-5f4a63dbc655" 00:17:35.379 ], 00:17:35.379 "product_name": "Malloc disk", 00:17:35.379 "block_size": 512, 00:17:35.379 "num_blocks": 65536, 00:17:35.379 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:35.379 "assigned_rate_limits": { 00:17:35.379 "rw_ios_per_sec": 0, 00:17:35.379 "rw_mbytes_per_sec": 0, 00:17:35.379 "r_mbytes_per_sec": 0, 00:17:35.379 "w_mbytes_per_sec": 0 00:17:35.379 }, 00:17:35.379 "claimed": false, 00:17:35.379 "zoned": false, 00:17:35.379 "supported_io_types": { 00:17:35.379 "read": true, 00:17:35.379 "write": true, 00:17:35.379 "unmap": true, 00:17:35.379 "flush": true, 00:17:35.379 "reset": true, 00:17:35.379 "nvme_admin": false, 00:17:35.379 "nvme_io": false, 00:17:35.379 "nvme_io_md": false, 00:17:35.379 "write_zeroes": true, 00:17:35.379 "zcopy": true, 00:17:35.379 "get_zone_info": false, 00:17:35.379 "zone_management": false, 00:17:35.379 "zone_append": false, 00:17:35.379 "compare": false, 00:17:35.379 "compare_and_write": false, 00:17:35.379 "abort": true, 00:17:35.379 "seek_hole": false, 00:17:35.379 "seek_data": false, 00:17:35.379 "copy": true, 
00:17:35.379 "nvme_iov_md": false 00:17:35.379 }, 00:17:35.379 "memory_domains": [ 00:17:35.379 { 00:17:35.379 "dma_device_id": "system", 00:17:35.379 "dma_device_type": 1 00:17:35.379 }, 00:17:35.379 { 00:17:35.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.379 "dma_device_type": 2 00:17:35.379 } 00:17:35.379 ], 00:17:35.379 "driver_specific": {} 00:17:35.379 } 00:17:35.379 ] 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.379 [2024-12-06 13:11:41.664602] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.379 [2024-12-06 13:11:41.664826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.379 [2024-12-06 13:11:41.664890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.379 [2024-12-06 13:11:41.667672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:35.379 [2024-12-06 13:11:41.667759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.379 13:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.379 "name": "Existed_Raid", 00:17:35.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.379 "strip_size_kb": 64, 00:17:35.379 "state": "configuring", 00:17:35.379 
"raid_level": "concat", 00:17:35.379 "superblock": false, 00:17:35.379 "num_base_bdevs": 4, 00:17:35.379 "num_base_bdevs_discovered": 3, 00:17:35.379 "num_base_bdevs_operational": 4, 00:17:35.379 "base_bdevs_list": [ 00:17:35.379 { 00:17:35.379 "name": "BaseBdev1", 00:17:35.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.379 "is_configured": false, 00:17:35.379 "data_offset": 0, 00:17:35.379 "data_size": 0 00:17:35.379 }, 00:17:35.379 { 00:17:35.379 "name": "BaseBdev2", 00:17:35.379 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:35.379 "is_configured": true, 00:17:35.379 "data_offset": 0, 00:17:35.379 "data_size": 65536 00:17:35.379 }, 00:17:35.379 { 00:17:35.379 "name": "BaseBdev3", 00:17:35.379 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:35.379 "is_configured": true, 00:17:35.379 "data_offset": 0, 00:17:35.379 "data_size": 65536 00:17:35.379 }, 00:17:35.379 { 00:17:35.379 "name": "BaseBdev4", 00:17:35.379 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:35.379 "is_configured": true, 00:17:35.379 "data_offset": 0, 00:17:35.379 "data_size": 65536 00:17:35.379 } 00:17:35.379 ] 00:17:35.379 }' 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.379 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.945 [2024-12-06 13:11:42.192735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.945 "name": "Existed_Raid", 00:17:35.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.945 "strip_size_kb": 64, 00:17:35.945 "state": "configuring", 00:17:35.945 "raid_level": "concat", 00:17:35.945 "superblock": false, 
00:17:35.945 "num_base_bdevs": 4, 00:17:35.945 "num_base_bdevs_discovered": 2, 00:17:35.945 "num_base_bdevs_operational": 4, 00:17:35.945 "base_bdevs_list": [ 00:17:35.945 { 00:17:35.945 "name": "BaseBdev1", 00:17:35.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.945 "is_configured": false, 00:17:35.945 "data_offset": 0, 00:17:35.945 "data_size": 0 00:17:35.945 }, 00:17:35.945 { 00:17:35.945 "name": null, 00:17:35.945 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:35.945 "is_configured": false, 00:17:35.945 "data_offset": 0, 00:17:35.945 "data_size": 65536 00:17:35.945 }, 00:17:35.945 { 00:17:35.945 "name": "BaseBdev3", 00:17:35.945 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:35.945 "is_configured": true, 00:17:35.945 "data_offset": 0, 00:17:35.945 "data_size": 65536 00:17:35.945 }, 00:17:35.945 { 00:17:35.945 "name": "BaseBdev4", 00:17:35.945 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:35.945 "is_configured": true, 00:17:35.945 "data_offset": 0, 00:17:35.945 "data_size": 65536 00:17:35.945 } 00:17:35.945 ] 00:17:35.945 }' 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.945 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.204 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:36.204 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.204 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.204 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.204 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:36.462 13:11:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.462 [2024-12-06 13:11:42.798233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.462 BaseBdev1 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:36.462 [ 00:17:36.462 { 00:17:36.462 "name": "BaseBdev1", 00:17:36.462 "aliases": [ 00:17:36.462 "71d075fb-dd89-42ca-850e-b171f58459c6" 00:17:36.462 ], 00:17:36.462 "product_name": "Malloc disk", 00:17:36.462 "block_size": 512, 00:17:36.462 "num_blocks": 65536, 00:17:36.462 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:36.462 "assigned_rate_limits": { 00:17:36.462 "rw_ios_per_sec": 0, 00:17:36.462 "rw_mbytes_per_sec": 0, 00:17:36.462 "r_mbytes_per_sec": 0, 00:17:36.462 "w_mbytes_per_sec": 0 00:17:36.462 }, 00:17:36.462 "claimed": true, 00:17:36.462 "claim_type": "exclusive_write", 00:17:36.462 "zoned": false, 00:17:36.462 "supported_io_types": { 00:17:36.462 "read": true, 00:17:36.462 "write": true, 00:17:36.462 "unmap": true, 00:17:36.462 "flush": true, 00:17:36.462 "reset": true, 00:17:36.462 "nvme_admin": false, 00:17:36.462 "nvme_io": false, 00:17:36.462 "nvme_io_md": false, 00:17:36.462 "write_zeroes": true, 00:17:36.462 "zcopy": true, 00:17:36.462 "get_zone_info": false, 00:17:36.462 "zone_management": false, 00:17:36.462 "zone_append": false, 00:17:36.462 "compare": false, 00:17:36.462 "compare_and_write": false, 00:17:36.462 "abort": true, 00:17:36.462 "seek_hole": false, 00:17:36.462 "seek_data": false, 00:17:36.462 "copy": true, 00:17:36.462 "nvme_iov_md": false 00:17:36.462 }, 00:17:36.462 "memory_domains": [ 00:17:36.462 { 00:17:36.462 "dma_device_id": "system", 00:17:36.462 "dma_device_type": 1 00:17:36.462 }, 00:17:36.462 { 00:17:36.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.462 "dma_device_type": 2 00:17:36.462 } 00:17:36.462 ], 00:17:36.462 "driver_specific": {} 00:17:36.462 } 00:17:36.462 ] 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.462 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.463 "name": "Existed_Raid", 00:17:36.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.463 "strip_size_kb": 64, 00:17:36.463 "state": "configuring", 00:17:36.463 "raid_level": "concat", 00:17:36.463 "superblock": false, 
00:17:36.463 "num_base_bdevs": 4, 00:17:36.463 "num_base_bdevs_discovered": 3, 00:17:36.463 "num_base_bdevs_operational": 4, 00:17:36.463 "base_bdevs_list": [ 00:17:36.463 { 00:17:36.463 "name": "BaseBdev1", 00:17:36.463 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:36.463 "is_configured": true, 00:17:36.463 "data_offset": 0, 00:17:36.463 "data_size": 65536 00:17:36.463 }, 00:17:36.463 { 00:17:36.463 "name": null, 00:17:36.463 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:36.463 "is_configured": false, 00:17:36.463 "data_offset": 0, 00:17:36.463 "data_size": 65536 00:17:36.463 }, 00:17:36.463 { 00:17:36.463 "name": "BaseBdev3", 00:17:36.463 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:36.463 "is_configured": true, 00:17:36.463 "data_offset": 0, 00:17:36.463 "data_size": 65536 00:17:36.463 }, 00:17:36.463 { 00:17:36.463 "name": "BaseBdev4", 00:17:36.463 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:36.463 "is_configured": true, 00:17:36.463 "data_offset": 0, 00:17:36.463 "data_size": 65536 00:17:36.463 } 00:17:36.463 ] 00:17:36.463 }' 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.463 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:37.029 13:11:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.029 [2024-12-06 13:11:43.366570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.029 13:11:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.029 "name": "Existed_Raid", 00:17:37.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.029 "strip_size_kb": 64, 00:17:37.029 "state": "configuring", 00:17:37.029 "raid_level": "concat", 00:17:37.029 "superblock": false, 00:17:37.029 "num_base_bdevs": 4, 00:17:37.029 "num_base_bdevs_discovered": 2, 00:17:37.029 "num_base_bdevs_operational": 4, 00:17:37.029 "base_bdevs_list": [ 00:17:37.029 { 00:17:37.029 "name": "BaseBdev1", 00:17:37.029 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:37.029 "is_configured": true, 00:17:37.029 "data_offset": 0, 00:17:37.029 "data_size": 65536 00:17:37.029 }, 00:17:37.029 { 00:17:37.029 "name": null, 00:17:37.029 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:37.029 "is_configured": false, 00:17:37.029 "data_offset": 0, 00:17:37.029 "data_size": 65536 00:17:37.029 }, 00:17:37.029 { 00:17:37.029 "name": null, 00:17:37.029 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:37.029 "is_configured": false, 00:17:37.029 "data_offset": 0, 00:17:37.029 "data_size": 65536 00:17:37.029 }, 00:17:37.029 { 00:17:37.029 "name": "BaseBdev4", 00:17:37.029 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:37.029 "is_configured": true, 00:17:37.029 "data_offset": 0, 00:17:37.029 "data_size": 65536 00:17:37.029 } 00:17:37.029 ] 00:17:37.029 }' 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.029 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.596 [2024-12-06 13:11:43.958699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.596 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.596 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.596 "name": "Existed_Raid", 00:17:37.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.596 "strip_size_kb": 64, 00:17:37.596 "state": "configuring", 00:17:37.596 "raid_level": "concat", 00:17:37.596 "superblock": false, 00:17:37.596 "num_base_bdevs": 4, 00:17:37.596 "num_base_bdevs_discovered": 3, 00:17:37.596 "num_base_bdevs_operational": 4, 00:17:37.596 "base_bdevs_list": [ 00:17:37.596 { 00:17:37.596 "name": "BaseBdev1", 00:17:37.596 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:37.596 "is_configured": true, 00:17:37.596 "data_offset": 0, 00:17:37.596 "data_size": 65536 00:17:37.596 }, 00:17:37.596 { 00:17:37.596 "name": null, 00:17:37.596 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:37.596 "is_configured": false, 00:17:37.596 "data_offset": 0, 00:17:37.596 "data_size": 65536 00:17:37.596 }, 00:17:37.596 { 00:17:37.596 "name": "BaseBdev3", 00:17:37.596 "uuid": 
"10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:37.596 "is_configured": true, 00:17:37.596 "data_offset": 0, 00:17:37.596 "data_size": 65536 00:17:37.596 }, 00:17:37.596 { 00:17:37.596 "name": "BaseBdev4", 00:17:37.596 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:37.596 "is_configured": true, 00:17:37.596 "data_offset": 0, 00:17:37.596 "data_size": 65536 00:17:37.596 } 00:17:37.596 ] 00:17:37.596 }' 00:17:37.596 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.596 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.163 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.163 [2024-12-06 13:11:44.586929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.422 "name": "Existed_Raid", 00:17:38.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.422 "strip_size_kb": 64, 00:17:38.422 "state": "configuring", 00:17:38.422 "raid_level": "concat", 00:17:38.422 "superblock": false, 00:17:38.422 "num_base_bdevs": 4, 00:17:38.422 
"num_base_bdevs_discovered": 2, 00:17:38.422 "num_base_bdevs_operational": 4, 00:17:38.422 "base_bdevs_list": [ 00:17:38.422 { 00:17:38.422 "name": null, 00:17:38.422 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:38.422 "is_configured": false, 00:17:38.422 "data_offset": 0, 00:17:38.422 "data_size": 65536 00:17:38.422 }, 00:17:38.422 { 00:17:38.422 "name": null, 00:17:38.422 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:38.422 "is_configured": false, 00:17:38.422 "data_offset": 0, 00:17:38.422 "data_size": 65536 00:17:38.422 }, 00:17:38.422 { 00:17:38.422 "name": "BaseBdev3", 00:17:38.422 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:38.422 "is_configured": true, 00:17:38.422 "data_offset": 0, 00:17:38.422 "data_size": 65536 00:17:38.422 }, 00:17:38.422 { 00:17:38.422 "name": "BaseBdev4", 00:17:38.422 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:38.422 "is_configured": true, 00:17:38.422 "data_offset": 0, 00:17:38.422 "data_size": 65536 00:17:38.422 } 00:17:38.422 ] 00:17:38.422 }' 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.422 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 [2024-12-06 13:11:45.278305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.989 "name": "Existed_Raid", 00:17:38.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.989 "strip_size_kb": 64, 00:17:38.989 "state": "configuring", 00:17:38.989 "raid_level": "concat", 00:17:38.989 "superblock": false, 00:17:38.989 "num_base_bdevs": 4, 00:17:38.989 "num_base_bdevs_discovered": 3, 00:17:38.989 "num_base_bdevs_operational": 4, 00:17:38.989 "base_bdevs_list": [ 00:17:38.989 { 00:17:38.989 "name": null, 00:17:38.989 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:38.989 "is_configured": false, 00:17:38.989 "data_offset": 0, 00:17:38.989 "data_size": 65536 00:17:38.989 }, 00:17:38.989 { 00:17:38.989 "name": "BaseBdev2", 00:17:38.989 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:38.989 "is_configured": true, 00:17:38.989 "data_offset": 0, 00:17:38.989 "data_size": 65536 00:17:38.989 }, 00:17:38.989 { 00:17:38.989 "name": "BaseBdev3", 00:17:38.989 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:38.989 "is_configured": true, 00:17:38.989 "data_offset": 0, 00:17:38.989 "data_size": 65536 00:17:38.989 }, 00:17:38.989 { 00:17:38.989 "name": "BaseBdev4", 00:17:38.989 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:38.989 "is_configured": true, 00:17:38.989 "data_offset": 0, 00:17:38.989 "data_size": 65536 00:17:38.989 } 00:17:38.989 ] 00:17:38.989 }' 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.989 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 71d075fb-dd89-42ca-850e-b171f58459c6 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.556 [2024-12-06 13:11:45.948413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:39.556 [2024-12-06 13:11:45.948492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:39.556 [2024-12-06 13:11:45.948507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:39.556 [2024-12-06 13:11:45.948848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:17:39.556 [2024-12-06 13:11:45.949026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:39.556 [2024-12-06 13:11:45.949045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:39.556 [2024-12-06 13:11:45.949322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.556 NewBaseBdev 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.556 13:11:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.556 [ 00:17:39.556 { 00:17:39.556 "name": "NewBaseBdev", 00:17:39.556 "aliases": [ 00:17:39.556 "71d075fb-dd89-42ca-850e-b171f58459c6" 00:17:39.556 ], 00:17:39.556 "product_name": "Malloc disk", 00:17:39.556 "block_size": 512, 00:17:39.556 "num_blocks": 65536, 00:17:39.556 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:39.556 "assigned_rate_limits": { 00:17:39.556 "rw_ios_per_sec": 0, 00:17:39.556 "rw_mbytes_per_sec": 0, 00:17:39.556 "r_mbytes_per_sec": 0, 00:17:39.556 "w_mbytes_per_sec": 0 00:17:39.556 }, 00:17:39.556 "claimed": true, 00:17:39.556 "claim_type": "exclusive_write", 00:17:39.556 "zoned": false, 00:17:39.556 "supported_io_types": { 00:17:39.556 "read": true, 00:17:39.556 "write": true, 00:17:39.556 "unmap": true, 00:17:39.556 "flush": true, 00:17:39.556 "reset": true, 00:17:39.557 "nvme_admin": false, 00:17:39.557 "nvme_io": false, 00:17:39.557 "nvme_io_md": false, 00:17:39.557 "write_zeroes": true, 00:17:39.557 "zcopy": true, 00:17:39.557 "get_zone_info": false, 00:17:39.557 "zone_management": false, 00:17:39.557 "zone_append": false, 00:17:39.557 "compare": false, 00:17:39.557 "compare_and_write": false, 00:17:39.557 "abort": true, 00:17:39.557 "seek_hole": false, 00:17:39.557 "seek_data": false, 00:17:39.557 "copy": true, 00:17:39.557 "nvme_iov_md": false 00:17:39.557 }, 00:17:39.557 "memory_domains": [ 00:17:39.557 { 00:17:39.557 "dma_device_id": "system", 00:17:39.557 "dma_device_type": 1 00:17:39.557 }, 00:17:39.557 { 00:17:39.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.557 "dma_device_type": 2 00:17:39.557 } 00:17:39.557 ], 00:17:39.557 "driver_specific": {} 00:17:39.557 } 00:17:39.557 ] 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.557 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.557 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.557 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.557 "name": "Existed_Raid", 00:17:39.557 "uuid": "6310473f-d957-41c3-9dd9-0752202af1ca", 00:17:39.557 "strip_size_kb": 64, 00:17:39.557 "state": "online", 00:17:39.557 "raid_level": "concat", 00:17:39.557 "superblock": false, 00:17:39.557 
"num_base_bdevs": 4, 00:17:39.557 "num_base_bdevs_discovered": 4, 00:17:39.557 "num_base_bdevs_operational": 4, 00:17:39.557 "base_bdevs_list": [ 00:17:39.557 { 00:17:39.557 "name": "NewBaseBdev", 00:17:39.557 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:39.557 "is_configured": true, 00:17:39.557 "data_offset": 0, 00:17:39.557 "data_size": 65536 00:17:39.557 }, 00:17:39.557 { 00:17:39.557 "name": "BaseBdev2", 00:17:39.557 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:39.557 "is_configured": true, 00:17:39.557 "data_offset": 0, 00:17:39.557 "data_size": 65536 00:17:39.557 }, 00:17:39.557 { 00:17:39.557 "name": "BaseBdev3", 00:17:39.557 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:39.557 "is_configured": true, 00:17:39.557 "data_offset": 0, 00:17:39.557 "data_size": 65536 00:17:39.557 }, 00:17:39.557 { 00:17:39.557 "name": "BaseBdev4", 00:17:39.557 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:39.557 "is_configured": true, 00:17:39.557 "data_offset": 0, 00:17:39.557 "data_size": 65536 00:17:39.557 } 00:17:39.557 ] 00:17:39.557 }' 00:17:39.557 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.557 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:40.123 13:11:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.123 [2024-12-06 13:11:46.485351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.123 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:40.123 "name": "Existed_Raid", 00:17:40.123 "aliases": [ 00:17:40.123 "6310473f-d957-41c3-9dd9-0752202af1ca" 00:17:40.123 ], 00:17:40.123 "product_name": "Raid Volume", 00:17:40.123 "block_size": 512, 00:17:40.123 "num_blocks": 262144, 00:17:40.123 "uuid": "6310473f-d957-41c3-9dd9-0752202af1ca", 00:17:40.123 "assigned_rate_limits": { 00:17:40.123 "rw_ios_per_sec": 0, 00:17:40.123 "rw_mbytes_per_sec": 0, 00:17:40.123 "r_mbytes_per_sec": 0, 00:17:40.123 "w_mbytes_per_sec": 0 00:17:40.123 }, 00:17:40.123 "claimed": false, 00:17:40.123 "zoned": false, 00:17:40.123 "supported_io_types": { 00:17:40.123 "read": true, 00:17:40.123 "write": true, 00:17:40.123 "unmap": true, 00:17:40.123 "flush": true, 00:17:40.123 "reset": true, 00:17:40.123 "nvme_admin": false, 00:17:40.123 "nvme_io": false, 00:17:40.123 "nvme_io_md": false, 00:17:40.123 "write_zeroes": true, 00:17:40.123 "zcopy": false, 00:17:40.123 "get_zone_info": false, 00:17:40.123 "zone_management": false, 00:17:40.123 "zone_append": false, 00:17:40.123 "compare": false, 00:17:40.123 "compare_and_write": false, 00:17:40.123 "abort": false, 00:17:40.123 "seek_hole": false, 00:17:40.123 "seek_data": false, 00:17:40.123 "copy": false, 00:17:40.123 "nvme_iov_md": false 00:17:40.123 }, 
00:17:40.123 "memory_domains": [ 00:17:40.123 { 00:17:40.123 "dma_device_id": "system", 00:17:40.123 "dma_device_type": 1 00:17:40.123 }, 00:17:40.123 { 00:17:40.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.123 "dma_device_type": 2 00:17:40.123 }, 00:17:40.123 { 00:17:40.123 "dma_device_id": "system", 00:17:40.123 "dma_device_type": 1 00:17:40.123 }, 00:17:40.123 { 00:17:40.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.123 "dma_device_type": 2 00:17:40.123 }, 00:17:40.123 { 00:17:40.123 "dma_device_id": "system", 00:17:40.123 "dma_device_type": 1 00:17:40.123 }, 00:17:40.123 { 00:17:40.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.124 "dma_device_type": 2 00:17:40.124 }, 00:17:40.124 { 00:17:40.124 "dma_device_id": "system", 00:17:40.124 "dma_device_type": 1 00:17:40.124 }, 00:17:40.124 { 00:17:40.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.124 "dma_device_type": 2 00:17:40.124 } 00:17:40.124 ], 00:17:40.124 "driver_specific": { 00:17:40.124 "raid": { 00:17:40.124 "uuid": "6310473f-d957-41c3-9dd9-0752202af1ca", 00:17:40.124 "strip_size_kb": 64, 00:17:40.124 "state": "online", 00:17:40.124 "raid_level": "concat", 00:17:40.124 "superblock": false, 00:17:40.124 "num_base_bdevs": 4, 00:17:40.124 "num_base_bdevs_discovered": 4, 00:17:40.124 "num_base_bdevs_operational": 4, 00:17:40.124 "base_bdevs_list": [ 00:17:40.124 { 00:17:40.124 "name": "NewBaseBdev", 00:17:40.124 "uuid": "71d075fb-dd89-42ca-850e-b171f58459c6", 00:17:40.124 "is_configured": true, 00:17:40.124 "data_offset": 0, 00:17:40.124 "data_size": 65536 00:17:40.124 }, 00:17:40.124 { 00:17:40.124 "name": "BaseBdev2", 00:17:40.124 "uuid": "e3a56bd2-b28f-46e2-a764-295e321ca85d", 00:17:40.124 "is_configured": true, 00:17:40.124 "data_offset": 0, 00:17:40.124 "data_size": 65536 00:17:40.124 }, 00:17:40.124 { 00:17:40.124 "name": "BaseBdev3", 00:17:40.124 "uuid": "10dd0d67-a7e1-49c6-b214-b89976928eec", 00:17:40.124 "is_configured": true, 00:17:40.124 "data_offset": 0, 
00:17:40.124 "data_size": 65536 00:17:40.124 }, 00:17:40.124 { 00:17:40.124 "name": "BaseBdev4", 00:17:40.124 "uuid": "68681dfb-4106-4cee-9f01-5f4a63dbc655", 00:17:40.124 "is_configured": true, 00:17:40.124 "data_offset": 0, 00:17:40.124 "data_size": 65536 00:17:40.124 } 00:17:40.124 ] 00:17:40.124 } 00:17:40.124 } 00:17:40.124 }' 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:40.124 BaseBdev2 00:17:40.124 BaseBdev3 00:17:40.124 BaseBdev4' 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.124 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.383 [2024-12-06 13:11:46.852842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.383 [2024-12-06 13:11:46.852897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.383 [2024-12-06 13:11:46.853029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.383 [2024-12-06 13:11:46.853144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.383 [2024-12-06 13:11:46.853171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71667 00:17:40.383 13:11:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71667 ']' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71667 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71667 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.383 killing process with pid 71667 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71667' 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71667 00:17:40.383 [2024-12-06 13:11:46.888347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.383 13:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71667 00:17:40.950 [2024-12-06 13:11:47.281439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:42.323 00:17:42.323 real 0m13.288s 00:17:42.323 user 0m21.717s 00:17:42.323 sys 0m1.894s 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.323 ************************************ 00:17:42.323 END TEST raid_state_function_test 00:17:42.323 ************************************ 00:17:42.323 13:11:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:17:42.323 13:11:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:42.323 13:11:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.323 13:11:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:42.323 ************************************ 00:17:42.323 START TEST raid_state_function_test_sb 00:17:42.323 ************************************ 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72355 00:17:42.323 13:11:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:42.323 Process raid pid: 72355 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72355' 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72355 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72355 ']' 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.323 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.323 [2024-12-06 13:11:48.726237] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:17:42.323 [2024-12-06 13:11:48.726478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.590 [2024-12-06 13:11:48.910759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.590 [2024-12-06 13:11:49.071082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.851 [2024-12-06 13:11:49.304554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.851 [2024-12-06 13:11:49.304615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.417 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.417 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:43.417 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.418 [2024-12-06 13:11:49.699928] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.418 [2024-12-06 13:11:49.700017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.418 [2024-12-06 13:11:49.700036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.418 [2024-12-06 13:11:49.700054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.418 [2024-12-06 13:11:49.700065] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:17:43.418 [2024-12-06 13:11:49.700082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:43.418 [2024-12-06 13:11:49.700092] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:43.418 [2024-12-06 13:11:49.700108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.418 13:11:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.418 "name": "Existed_Raid", 00:17:43.418 "uuid": "7c377794-0d1d-4231-907c-3984a3aec3b9", 00:17:43.418 "strip_size_kb": 64, 00:17:43.418 "state": "configuring", 00:17:43.418 "raid_level": "concat", 00:17:43.418 "superblock": true, 00:17:43.418 "num_base_bdevs": 4, 00:17:43.418 "num_base_bdevs_discovered": 0, 00:17:43.418 "num_base_bdevs_operational": 4, 00:17:43.418 "base_bdevs_list": [ 00:17:43.418 { 00:17:43.418 "name": "BaseBdev1", 00:17:43.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.418 "is_configured": false, 00:17:43.418 "data_offset": 0, 00:17:43.418 "data_size": 0 00:17:43.418 }, 00:17:43.418 { 00:17:43.418 "name": "BaseBdev2", 00:17:43.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.418 "is_configured": false, 00:17:43.418 "data_offset": 0, 00:17:43.418 "data_size": 0 00:17:43.418 }, 00:17:43.418 { 00:17:43.418 "name": "BaseBdev3", 00:17:43.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.418 "is_configured": false, 00:17:43.418 "data_offset": 0, 00:17:43.418 "data_size": 0 00:17:43.418 }, 00:17:43.418 { 00:17:43.418 "name": "BaseBdev4", 00:17:43.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.418 "is_configured": false, 00:17:43.418 "data_offset": 0, 00:17:43.418 "data_size": 0 00:17:43.418 } 00:17:43.418 ] 00:17:43.418 }' 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.418 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.985 13:11:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.985 [2024-12-06 13:11:50.219987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.985 [2024-12-06 13:11:50.220044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.985 [2024-12-06 13:11:50.227959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.985 [2024-12-06 13:11:50.228023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.985 [2024-12-06 13:11:50.228041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.985 [2024-12-06 13:11:50.228058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.985 [2024-12-06 13:11:50.228069] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:43.985 [2024-12-06 13:11:50.228084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:43.985 [2024-12-06 13:11:50.228094] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:17:43.985 [2024-12-06 13:11:50.228109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.985 [2024-12-06 13:11:50.276605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.985 BaseBdev1 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.985 [ 00:17:43.985 { 00:17:43.985 "name": "BaseBdev1", 00:17:43.985 "aliases": [ 00:17:43.985 "dbb5cb23-d969-4366-8821-ceda22e53bef" 00:17:43.985 ], 00:17:43.985 "product_name": "Malloc disk", 00:17:43.985 "block_size": 512, 00:17:43.985 "num_blocks": 65536, 00:17:43.985 "uuid": "dbb5cb23-d969-4366-8821-ceda22e53bef", 00:17:43.985 "assigned_rate_limits": { 00:17:43.985 "rw_ios_per_sec": 0, 00:17:43.985 "rw_mbytes_per_sec": 0, 00:17:43.985 "r_mbytes_per_sec": 0, 00:17:43.985 "w_mbytes_per_sec": 0 00:17:43.985 }, 00:17:43.985 "claimed": true, 00:17:43.985 "claim_type": "exclusive_write", 00:17:43.985 "zoned": false, 00:17:43.985 "supported_io_types": { 00:17:43.985 "read": true, 00:17:43.985 "write": true, 00:17:43.985 "unmap": true, 00:17:43.985 "flush": true, 00:17:43.985 "reset": true, 00:17:43.985 "nvme_admin": false, 00:17:43.985 "nvme_io": false, 00:17:43.985 "nvme_io_md": false, 00:17:43.985 "write_zeroes": true, 00:17:43.985 "zcopy": true, 00:17:43.985 "get_zone_info": false, 00:17:43.985 "zone_management": false, 00:17:43.985 "zone_append": false, 00:17:43.985 "compare": false, 00:17:43.985 "compare_and_write": false, 00:17:43.985 "abort": true, 00:17:43.985 "seek_hole": false, 00:17:43.985 "seek_data": false, 00:17:43.985 "copy": true, 00:17:43.985 "nvme_iov_md": false 00:17:43.985 }, 00:17:43.985 "memory_domains": [ 00:17:43.985 { 00:17:43.985 "dma_device_id": "system", 00:17:43.985 "dma_device_type": 1 00:17:43.985 }, 00:17:43.985 { 00:17:43.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.985 "dma_device_type": 2 00:17:43.985 } 
00:17:43.985 ], 00:17:43.985 "driver_specific": {} 00:17:43.985 } 00:17:43.985 ] 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.985 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.986 13:11:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.986 "name": "Existed_Raid", 00:17:43.986 "uuid": "d725e191-c259-495c-bd73-49db4098e611", 00:17:43.986 "strip_size_kb": 64, 00:17:43.986 "state": "configuring", 00:17:43.986 "raid_level": "concat", 00:17:43.986 "superblock": true, 00:17:43.986 "num_base_bdevs": 4, 00:17:43.986 "num_base_bdevs_discovered": 1, 00:17:43.986 "num_base_bdevs_operational": 4, 00:17:43.986 "base_bdevs_list": [ 00:17:43.986 { 00:17:43.986 "name": "BaseBdev1", 00:17:43.986 "uuid": "dbb5cb23-d969-4366-8821-ceda22e53bef", 00:17:43.986 "is_configured": true, 00:17:43.986 "data_offset": 2048, 00:17:43.986 "data_size": 63488 00:17:43.986 }, 00:17:43.986 { 00:17:43.986 "name": "BaseBdev2", 00:17:43.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.986 "is_configured": false, 00:17:43.986 "data_offset": 0, 00:17:43.986 "data_size": 0 00:17:43.986 }, 00:17:43.986 { 00:17:43.986 "name": "BaseBdev3", 00:17:43.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.986 "is_configured": false, 00:17:43.986 "data_offset": 0, 00:17:43.986 "data_size": 0 00:17:43.986 }, 00:17:43.986 { 00:17:43.986 "name": "BaseBdev4", 00:17:43.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.986 "is_configured": false, 00:17:43.986 "data_offset": 0, 00:17:43.986 "data_size": 0 00:17:43.986 } 00:17:43.986 ] 00:17:43.986 }' 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.986 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.552 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.552 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.553 13:11:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.553 [2024-12-06 13:11:50.788821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.553 [2024-12-06 13:11:50.788900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.553 [2024-12-06 13:11:50.796913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.553 [2024-12-06 13:11:50.799601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.553 [2024-12-06 13:11:50.799673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.553 [2024-12-06 13:11:50.799692] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.553 [2024-12-06 13:11:50.799711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.553 [2024-12-06 13:11:50.799722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:44.553 [2024-12-06 13:11:50.799737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:44.553 "name": "Existed_Raid", 00:17:44.553 "uuid": "7b50d3b3-97e3-4ce0-a123-fca6a93ce8d2", 00:17:44.553 "strip_size_kb": 64, 00:17:44.553 "state": "configuring", 00:17:44.553 "raid_level": "concat", 00:17:44.553 "superblock": true, 00:17:44.553 "num_base_bdevs": 4, 00:17:44.553 "num_base_bdevs_discovered": 1, 00:17:44.553 "num_base_bdevs_operational": 4, 00:17:44.553 "base_bdevs_list": [ 00:17:44.553 { 00:17:44.553 "name": "BaseBdev1", 00:17:44.553 "uuid": "dbb5cb23-d969-4366-8821-ceda22e53bef", 00:17:44.553 "is_configured": true, 00:17:44.553 "data_offset": 2048, 00:17:44.553 "data_size": 63488 00:17:44.553 }, 00:17:44.553 { 00:17:44.553 "name": "BaseBdev2", 00:17:44.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.553 "is_configured": false, 00:17:44.553 "data_offset": 0, 00:17:44.553 "data_size": 0 00:17:44.553 }, 00:17:44.553 { 00:17:44.553 "name": "BaseBdev3", 00:17:44.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.553 "is_configured": false, 00:17:44.553 "data_offset": 0, 00:17:44.553 "data_size": 0 00:17:44.553 }, 00:17:44.553 { 00:17:44.553 "name": "BaseBdev4", 00:17:44.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.553 "is_configured": false, 00:17:44.553 "data_offset": 0, 00:17:44.553 "data_size": 0 00:17:44.553 } 00:17:44.553 ] 00:17:44.553 }' 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.553 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.811 [2024-12-06 13:11:51.306849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:17:44.811 BaseBdev2 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.811 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.811 [ 00:17:44.811 { 00:17:44.811 "name": "BaseBdev2", 00:17:44.811 "aliases": [ 00:17:44.811 "585416e2-2c44-44c9-b462-2fa543267219" 00:17:44.811 ], 00:17:44.811 "product_name": "Malloc disk", 00:17:44.811 "block_size": 512, 00:17:44.811 "num_blocks": 65536, 00:17:44.811 "uuid": "585416e2-2c44-44c9-b462-2fa543267219", 
00:17:44.811 "assigned_rate_limits": { 00:17:44.811 "rw_ios_per_sec": 0, 00:17:44.811 "rw_mbytes_per_sec": 0, 00:17:44.811 "r_mbytes_per_sec": 0, 00:17:44.811 "w_mbytes_per_sec": 0 00:17:44.811 }, 00:17:44.811 "claimed": true, 00:17:44.811 "claim_type": "exclusive_write", 00:17:44.811 "zoned": false, 00:17:44.811 "supported_io_types": { 00:17:44.811 "read": true, 00:17:44.811 "write": true, 00:17:44.811 "unmap": true, 00:17:44.811 "flush": true, 00:17:44.811 "reset": true, 00:17:44.811 "nvme_admin": false, 00:17:44.811 "nvme_io": false, 00:17:44.811 "nvme_io_md": false, 00:17:44.811 "write_zeroes": true, 00:17:44.811 "zcopy": true, 00:17:44.811 "get_zone_info": false, 00:17:44.811 "zone_management": false, 00:17:44.811 "zone_append": false, 00:17:44.811 "compare": false, 00:17:44.811 "compare_and_write": false, 00:17:44.811 "abort": true, 00:17:44.811 "seek_hole": false, 00:17:44.811 "seek_data": false, 00:17:44.811 "copy": true, 00:17:44.811 "nvme_iov_md": false 00:17:44.811 }, 00:17:44.811 "memory_domains": [ 00:17:44.811 { 00:17:44.811 "dma_device_id": "system", 00:17:44.811 "dma_device_type": 1 00:17:44.811 }, 00:17:44.811 { 00:17:44.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.811 "dma_device_type": 2 00:17:44.811 } 00:17:44.811 ], 00:17:44.811 "driver_specific": {} 00:17:45.068 } 00:17:45.068 ] 00:17:45.068 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.068 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:45.068 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.069 "name": "Existed_Raid", 00:17:45.069 "uuid": "7b50d3b3-97e3-4ce0-a123-fca6a93ce8d2", 00:17:45.069 "strip_size_kb": 64, 00:17:45.069 "state": "configuring", 00:17:45.069 "raid_level": "concat", 00:17:45.069 "superblock": true, 00:17:45.069 "num_base_bdevs": 4, 00:17:45.069 "num_base_bdevs_discovered": 2, 00:17:45.069 
"num_base_bdevs_operational": 4, 00:17:45.069 "base_bdevs_list": [ 00:17:45.069 { 00:17:45.069 "name": "BaseBdev1", 00:17:45.069 "uuid": "dbb5cb23-d969-4366-8821-ceda22e53bef", 00:17:45.069 "is_configured": true, 00:17:45.069 "data_offset": 2048, 00:17:45.069 "data_size": 63488 00:17:45.069 }, 00:17:45.069 { 00:17:45.069 "name": "BaseBdev2", 00:17:45.069 "uuid": "585416e2-2c44-44c9-b462-2fa543267219", 00:17:45.069 "is_configured": true, 00:17:45.069 "data_offset": 2048, 00:17:45.069 "data_size": 63488 00:17:45.069 }, 00:17:45.069 { 00:17:45.069 "name": "BaseBdev3", 00:17:45.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.069 "is_configured": false, 00:17:45.069 "data_offset": 0, 00:17:45.069 "data_size": 0 00:17:45.069 }, 00:17:45.069 { 00:17:45.069 "name": "BaseBdev4", 00:17:45.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.069 "is_configured": false, 00:17:45.069 "data_offset": 0, 00:17:45.069 "data_size": 0 00:17:45.069 } 00:17:45.069 ] 00:17:45.069 }' 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.069 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.327 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.327 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.327 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.586 [2024-12-06 13:11:51.890895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.586 BaseBdev3 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.586 [ 00:17:45.586 { 00:17:45.586 "name": "BaseBdev3", 00:17:45.586 "aliases": [ 00:17:45.586 "0b01a665-ce94-4f50-b823-280bf8c64d53" 00:17:45.586 ], 00:17:45.586 "product_name": "Malloc disk", 00:17:45.586 "block_size": 512, 00:17:45.586 "num_blocks": 65536, 00:17:45.586 "uuid": "0b01a665-ce94-4f50-b823-280bf8c64d53", 00:17:45.586 "assigned_rate_limits": { 00:17:45.586 "rw_ios_per_sec": 0, 00:17:45.586 "rw_mbytes_per_sec": 0, 00:17:45.586 "r_mbytes_per_sec": 0, 00:17:45.586 "w_mbytes_per_sec": 0 00:17:45.586 }, 00:17:45.586 "claimed": true, 00:17:45.586 "claim_type": "exclusive_write", 00:17:45.586 "zoned": false, 00:17:45.586 "supported_io_types": { 
00:17:45.586 "read": true, 00:17:45.586 "write": true, 00:17:45.586 "unmap": true, 00:17:45.586 "flush": true, 00:17:45.586 "reset": true, 00:17:45.586 "nvme_admin": false, 00:17:45.586 "nvme_io": false, 00:17:45.586 "nvme_io_md": false, 00:17:45.586 "write_zeroes": true, 00:17:45.586 "zcopy": true, 00:17:45.586 "get_zone_info": false, 00:17:45.586 "zone_management": false, 00:17:45.586 "zone_append": false, 00:17:45.586 "compare": false, 00:17:45.586 "compare_and_write": false, 00:17:45.586 "abort": true, 00:17:45.586 "seek_hole": false, 00:17:45.586 "seek_data": false, 00:17:45.586 "copy": true, 00:17:45.586 "nvme_iov_md": false 00:17:45.586 }, 00:17:45.586 "memory_domains": [ 00:17:45.586 { 00:17:45.586 "dma_device_id": "system", 00:17:45.586 "dma_device_type": 1 00:17:45.586 }, 00:17:45.586 { 00:17:45.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.586 "dma_device_type": 2 00:17:45.586 } 00:17:45.586 ], 00:17:45.586 "driver_specific": {} 00:17:45.586 } 00:17:45.586 ] 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:45.586 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.587 "name": "Existed_Raid", 00:17:45.587 "uuid": "7b50d3b3-97e3-4ce0-a123-fca6a93ce8d2", 00:17:45.587 "strip_size_kb": 64, 00:17:45.587 "state": "configuring", 00:17:45.587 "raid_level": "concat", 00:17:45.587 "superblock": true, 00:17:45.587 "num_base_bdevs": 4, 00:17:45.587 "num_base_bdevs_discovered": 3, 00:17:45.587 "num_base_bdevs_operational": 4, 00:17:45.587 "base_bdevs_list": [ 00:17:45.587 { 00:17:45.587 "name": "BaseBdev1", 00:17:45.587 "uuid": "dbb5cb23-d969-4366-8821-ceda22e53bef", 00:17:45.587 "is_configured": true, 00:17:45.587 "data_offset": 2048, 00:17:45.587 "data_size": 63488 00:17:45.587 }, 00:17:45.587 { 00:17:45.587 "name": "BaseBdev2", 00:17:45.587 
"uuid": "585416e2-2c44-44c9-b462-2fa543267219", 00:17:45.587 "is_configured": true, 00:17:45.587 "data_offset": 2048, 00:17:45.587 "data_size": 63488 00:17:45.587 }, 00:17:45.587 { 00:17:45.587 "name": "BaseBdev3", 00:17:45.587 "uuid": "0b01a665-ce94-4f50-b823-280bf8c64d53", 00:17:45.587 "is_configured": true, 00:17:45.587 "data_offset": 2048, 00:17:45.587 "data_size": 63488 00:17:45.587 }, 00:17:45.587 { 00:17:45.587 "name": "BaseBdev4", 00:17:45.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.587 "is_configured": false, 00:17:45.587 "data_offset": 0, 00:17:45.587 "data_size": 0 00:17:45.587 } 00:17:45.587 ] 00:17:45.587 }' 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.587 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.153 [2024-12-06 13:11:52.453429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.153 [2024-12-06 13:11:52.454160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:46.153 [2024-12-06 13:11:52.454189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:46.153 [2024-12-06 13:11:52.454598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:46.153 BaseBdev4 00:17:46.153 [2024-12-06 13:11:52.454814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:46.153 [2024-12-06 13:11:52.454836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:17:46.153 [2024-12-06 13:11:52.455027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.153 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.153 [ 00:17:46.153 { 00:17:46.153 "name": "BaseBdev4", 00:17:46.153 "aliases": [ 00:17:46.153 "349b9741-e67c-4621-8b45-bc290572ea27" 00:17:46.153 ], 00:17:46.153 "product_name": "Malloc disk", 00:17:46.153 "block_size": 512, 00:17:46.153 
"num_blocks": 65536, 00:17:46.153 "uuid": "349b9741-e67c-4621-8b45-bc290572ea27", 00:17:46.153 "assigned_rate_limits": { 00:17:46.153 "rw_ios_per_sec": 0, 00:17:46.153 "rw_mbytes_per_sec": 0, 00:17:46.153 "r_mbytes_per_sec": 0, 00:17:46.153 "w_mbytes_per_sec": 0 00:17:46.153 }, 00:17:46.153 "claimed": true, 00:17:46.153 "claim_type": "exclusive_write", 00:17:46.153 "zoned": false, 00:17:46.153 "supported_io_types": { 00:17:46.153 "read": true, 00:17:46.153 "write": true, 00:17:46.153 "unmap": true, 00:17:46.153 "flush": true, 00:17:46.153 "reset": true, 00:17:46.153 "nvme_admin": false, 00:17:46.153 "nvme_io": false, 00:17:46.153 "nvme_io_md": false, 00:17:46.153 "write_zeroes": true, 00:17:46.153 "zcopy": true, 00:17:46.153 "get_zone_info": false, 00:17:46.153 "zone_management": false, 00:17:46.153 "zone_append": false, 00:17:46.153 "compare": false, 00:17:46.153 "compare_and_write": false, 00:17:46.153 "abort": true, 00:17:46.153 "seek_hole": false, 00:17:46.153 "seek_data": false, 00:17:46.153 "copy": true, 00:17:46.153 "nvme_iov_md": false 00:17:46.153 }, 00:17:46.153 "memory_domains": [ 00:17:46.153 { 00:17:46.153 "dma_device_id": "system", 00:17:46.153 "dma_device_type": 1 00:17:46.153 }, 00:17:46.153 { 00:17:46.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.153 "dma_device_type": 2 00:17:46.153 } 00:17:46.153 ], 00:17:46.154 "driver_specific": {} 00:17:46.154 } 00:17:46.154 ] 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.154 "name": "Existed_Raid", 00:17:46.154 "uuid": "7b50d3b3-97e3-4ce0-a123-fca6a93ce8d2", 00:17:46.154 "strip_size_kb": 64, 00:17:46.154 "state": "online", 00:17:46.154 "raid_level": "concat", 00:17:46.154 "superblock": true, 00:17:46.154 "num_base_bdevs": 4, 
00:17:46.154 "num_base_bdevs_discovered": 4, 00:17:46.154 "num_base_bdevs_operational": 4, 00:17:46.154 "base_bdevs_list": [ 00:17:46.154 { 00:17:46.154 "name": "BaseBdev1", 00:17:46.154 "uuid": "dbb5cb23-d969-4366-8821-ceda22e53bef", 00:17:46.154 "is_configured": true, 00:17:46.154 "data_offset": 2048, 00:17:46.154 "data_size": 63488 00:17:46.154 }, 00:17:46.154 { 00:17:46.154 "name": "BaseBdev2", 00:17:46.154 "uuid": "585416e2-2c44-44c9-b462-2fa543267219", 00:17:46.154 "is_configured": true, 00:17:46.154 "data_offset": 2048, 00:17:46.154 "data_size": 63488 00:17:46.154 }, 00:17:46.154 { 00:17:46.154 "name": "BaseBdev3", 00:17:46.154 "uuid": "0b01a665-ce94-4f50-b823-280bf8c64d53", 00:17:46.154 "is_configured": true, 00:17:46.154 "data_offset": 2048, 00:17:46.154 "data_size": 63488 00:17:46.154 }, 00:17:46.154 { 00:17:46.154 "name": "BaseBdev4", 00:17:46.154 "uuid": "349b9741-e67c-4621-8b45-bc290572ea27", 00:17:46.154 "is_configured": true, 00:17:46.154 "data_offset": 2048, 00:17:46.154 "data_size": 63488 00:17:46.154 } 00:17:46.154 ] 00:17:46.154 }' 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.154 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.720 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.720 
13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.720 [2024-12-06 13:11:53.014123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.720 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:46.720 "name": "Existed_Raid", 00:17:46.720 "aliases": [ 00:17:46.720 "7b50d3b3-97e3-4ce0-a123-fca6a93ce8d2" 00:17:46.720 ], 00:17:46.720 "product_name": "Raid Volume", 00:17:46.720 "block_size": 512, 00:17:46.720 "num_blocks": 253952, 00:17:46.720 "uuid": "7b50d3b3-97e3-4ce0-a123-fca6a93ce8d2", 00:17:46.720 "assigned_rate_limits": { 00:17:46.720 "rw_ios_per_sec": 0, 00:17:46.720 "rw_mbytes_per_sec": 0, 00:17:46.720 "r_mbytes_per_sec": 0, 00:17:46.720 "w_mbytes_per_sec": 0 00:17:46.720 }, 00:17:46.720 "claimed": false, 00:17:46.720 "zoned": false, 00:17:46.720 "supported_io_types": { 00:17:46.720 "read": true, 00:17:46.720 "write": true, 00:17:46.720 "unmap": true, 00:17:46.720 "flush": true, 00:17:46.720 "reset": true, 00:17:46.720 "nvme_admin": false, 00:17:46.720 "nvme_io": false, 00:17:46.720 "nvme_io_md": false, 00:17:46.720 "write_zeroes": true, 00:17:46.721 "zcopy": false, 00:17:46.721 "get_zone_info": false, 00:17:46.721 "zone_management": false, 00:17:46.721 "zone_append": false, 00:17:46.721 "compare": false, 00:17:46.721 "compare_and_write": false, 00:17:46.721 "abort": false, 00:17:46.721 "seek_hole": false, 00:17:46.721 "seek_data": false, 00:17:46.721 "copy": false, 00:17:46.721 
"nvme_iov_md": false 00:17:46.721 }, 00:17:46.721 "memory_domains": [ 00:17:46.721 { 00:17:46.721 "dma_device_id": "system", 00:17:46.721 "dma_device_type": 1 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.721 "dma_device_type": 2 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "dma_device_id": "system", 00:17:46.721 "dma_device_type": 1 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.721 "dma_device_type": 2 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "dma_device_id": "system", 00:17:46.721 "dma_device_type": 1 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.721 "dma_device_type": 2 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "dma_device_id": "system", 00:17:46.721 "dma_device_type": 1 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.721 "dma_device_type": 2 00:17:46.721 } 00:17:46.721 ], 00:17:46.721 "driver_specific": { 00:17:46.721 "raid": { 00:17:46.721 "uuid": "7b50d3b3-97e3-4ce0-a123-fca6a93ce8d2", 00:17:46.721 "strip_size_kb": 64, 00:17:46.721 "state": "online", 00:17:46.721 "raid_level": "concat", 00:17:46.721 "superblock": true, 00:17:46.721 "num_base_bdevs": 4, 00:17:46.721 "num_base_bdevs_discovered": 4, 00:17:46.721 "num_base_bdevs_operational": 4, 00:17:46.721 "base_bdevs_list": [ 00:17:46.721 { 00:17:46.721 "name": "BaseBdev1", 00:17:46.721 "uuid": "dbb5cb23-d969-4366-8821-ceda22e53bef", 00:17:46.721 "is_configured": true, 00:17:46.721 "data_offset": 2048, 00:17:46.721 "data_size": 63488 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "name": "BaseBdev2", 00:17:46.721 "uuid": "585416e2-2c44-44c9-b462-2fa543267219", 00:17:46.721 "is_configured": true, 00:17:46.721 "data_offset": 2048, 00:17:46.721 "data_size": 63488 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "name": "BaseBdev3", 00:17:46.721 "uuid": "0b01a665-ce94-4f50-b823-280bf8c64d53", 00:17:46.721 "is_configured": true, 
00:17:46.721 "data_offset": 2048, 00:17:46.721 "data_size": 63488 00:17:46.721 }, 00:17:46.721 { 00:17:46.721 "name": "BaseBdev4", 00:17:46.721 "uuid": "349b9741-e67c-4621-8b45-bc290572ea27", 00:17:46.721 "is_configured": true, 00:17:46.721 "data_offset": 2048, 00:17:46.721 "data_size": 63488 00:17:46.721 } 00:17:46.721 ] 00:17:46.721 } 00:17:46.721 } 00:17:46.721 }' 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:46.721 BaseBdev2 00:17:46.721 BaseBdev3 00:17:46.721 BaseBdev4' 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.721 13:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.721 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.979 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.980 [2024-12-06 13:11:53.401982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.980 [2024-12-06 13:11:53.402033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.980 [2024-12-06 13:11:53.402114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.980 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.239 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:47.239 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.239 "name": "Existed_Raid", 00:17:47.239 "uuid": "7b50d3b3-97e3-4ce0-a123-fca6a93ce8d2", 00:17:47.239 "strip_size_kb": 64, 00:17:47.239 "state": "offline", 00:17:47.239 "raid_level": "concat", 00:17:47.239 "superblock": true, 00:17:47.239 "num_base_bdevs": 4, 00:17:47.239 "num_base_bdevs_discovered": 3, 00:17:47.239 "num_base_bdevs_operational": 3, 00:17:47.239 "base_bdevs_list": [ 00:17:47.239 { 00:17:47.239 "name": null, 00:17:47.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.239 "is_configured": false, 00:17:47.239 "data_offset": 0, 00:17:47.239 "data_size": 63488 00:17:47.239 }, 00:17:47.239 { 00:17:47.239 "name": "BaseBdev2", 00:17:47.239 "uuid": "585416e2-2c44-44c9-b462-2fa543267219", 00:17:47.239 "is_configured": true, 00:17:47.239 "data_offset": 2048, 00:17:47.239 "data_size": 63488 00:17:47.239 }, 00:17:47.239 { 00:17:47.239 "name": "BaseBdev3", 00:17:47.239 "uuid": "0b01a665-ce94-4f50-b823-280bf8c64d53", 00:17:47.239 "is_configured": true, 00:17:47.239 "data_offset": 2048, 00:17:47.239 "data_size": 63488 00:17:47.239 }, 00:17:47.239 { 00:17:47.239 "name": "BaseBdev4", 00:17:47.239 "uuid": "349b9741-e67c-4621-8b45-bc290572ea27", 00:17:47.239 "is_configured": true, 00:17:47.239 "data_offset": 2048, 00:17:47.239 "data_size": 63488 00:17:47.239 } 00:17:47.239 ] 00:17:47.239 }' 00:17:47.239 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.239 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.497 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:47.497 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.755 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.755 
13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.755 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.755 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.755 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.755 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.756 [2024-12-06 13:11:54.075403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.756 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.756 [2024-12-06 13:11:54.226327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:48.014 13:11:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.014 [2024-12-06 13:11:54.378305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:48.014 [2024-12-06 13:11:54.378376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.014 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.273 BaseBdev2 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.273 [ 00:17:48.273 { 00:17:48.273 "name": "BaseBdev2", 00:17:48.273 "aliases": [ 00:17:48.273 
"aea38ecd-f918-424e-97e7-77591e3fe7e4" 00:17:48.273 ], 00:17:48.273 "product_name": "Malloc disk", 00:17:48.273 "block_size": 512, 00:17:48.273 "num_blocks": 65536, 00:17:48.273 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:48.273 "assigned_rate_limits": { 00:17:48.273 "rw_ios_per_sec": 0, 00:17:48.273 "rw_mbytes_per_sec": 0, 00:17:48.273 "r_mbytes_per_sec": 0, 00:17:48.273 "w_mbytes_per_sec": 0 00:17:48.273 }, 00:17:48.273 "claimed": false, 00:17:48.273 "zoned": false, 00:17:48.273 "supported_io_types": { 00:17:48.273 "read": true, 00:17:48.273 "write": true, 00:17:48.273 "unmap": true, 00:17:48.273 "flush": true, 00:17:48.273 "reset": true, 00:17:48.273 "nvme_admin": false, 00:17:48.273 "nvme_io": false, 00:17:48.273 "nvme_io_md": false, 00:17:48.273 "write_zeroes": true, 00:17:48.273 "zcopy": true, 00:17:48.273 "get_zone_info": false, 00:17:48.273 "zone_management": false, 00:17:48.273 "zone_append": false, 00:17:48.273 "compare": false, 00:17:48.273 "compare_and_write": false, 00:17:48.273 "abort": true, 00:17:48.273 "seek_hole": false, 00:17:48.273 "seek_data": false, 00:17:48.273 "copy": true, 00:17:48.273 "nvme_iov_md": false 00:17:48.273 }, 00:17:48.273 "memory_domains": [ 00:17:48.273 { 00:17:48.273 "dma_device_id": "system", 00:17:48.273 "dma_device_type": 1 00:17:48.273 }, 00:17:48.273 { 00:17:48.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.273 "dma_device_type": 2 00:17:48.273 } 00:17:48.273 ], 00:17:48.273 "driver_specific": {} 00:17:48.273 } 00:17:48.273 ] 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.273 13:11:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.273 BaseBdev3 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:48.273 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.274 [ 00:17:48.274 { 
00:17:48.274 "name": "BaseBdev3", 00:17:48.274 "aliases": [ 00:17:48.274 "f3606bdd-334d-47c1-be30-84c368a97377" 00:17:48.274 ], 00:17:48.274 "product_name": "Malloc disk", 00:17:48.274 "block_size": 512, 00:17:48.274 "num_blocks": 65536, 00:17:48.274 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:48.274 "assigned_rate_limits": { 00:17:48.274 "rw_ios_per_sec": 0, 00:17:48.274 "rw_mbytes_per_sec": 0, 00:17:48.274 "r_mbytes_per_sec": 0, 00:17:48.274 "w_mbytes_per_sec": 0 00:17:48.274 }, 00:17:48.274 "claimed": false, 00:17:48.274 "zoned": false, 00:17:48.274 "supported_io_types": { 00:17:48.274 "read": true, 00:17:48.274 "write": true, 00:17:48.274 "unmap": true, 00:17:48.274 "flush": true, 00:17:48.274 "reset": true, 00:17:48.274 "nvme_admin": false, 00:17:48.274 "nvme_io": false, 00:17:48.274 "nvme_io_md": false, 00:17:48.274 "write_zeroes": true, 00:17:48.274 "zcopy": true, 00:17:48.274 "get_zone_info": false, 00:17:48.274 "zone_management": false, 00:17:48.274 "zone_append": false, 00:17:48.274 "compare": false, 00:17:48.274 "compare_and_write": false, 00:17:48.274 "abort": true, 00:17:48.274 "seek_hole": false, 00:17:48.274 "seek_data": false, 00:17:48.274 "copy": true, 00:17:48.274 "nvme_iov_md": false 00:17:48.274 }, 00:17:48.274 "memory_domains": [ 00:17:48.274 { 00:17:48.274 "dma_device_id": "system", 00:17:48.274 "dma_device_type": 1 00:17:48.274 }, 00:17:48.274 { 00:17:48.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.274 "dma_device_type": 2 00:17:48.274 } 00:17:48.274 ], 00:17:48.274 "driver_specific": {} 00:17:48.274 } 00:17:48.274 ] 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.274 BaseBdev4 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:48.274 [ 00:17:48.274 { 00:17:48.274 "name": "BaseBdev4", 00:17:48.274 "aliases": [ 00:17:48.274 "1bce90f7-9c76-49ea-8b2c-143f2ffff434" 00:17:48.274 ], 00:17:48.274 "product_name": "Malloc disk", 00:17:48.274 "block_size": 512, 00:17:48.274 "num_blocks": 65536, 00:17:48.274 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:48.274 "assigned_rate_limits": { 00:17:48.274 "rw_ios_per_sec": 0, 00:17:48.274 "rw_mbytes_per_sec": 0, 00:17:48.274 "r_mbytes_per_sec": 0, 00:17:48.274 "w_mbytes_per_sec": 0 00:17:48.274 }, 00:17:48.274 "claimed": false, 00:17:48.274 "zoned": false, 00:17:48.274 "supported_io_types": { 00:17:48.274 "read": true, 00:17:48.274 "write": true, 00:17:48.274 "unmap": true, 00:17:48.274 "flush": true, 00:17:48.274 "reset": true, 00:17:48.274 "nvme_admin": false, 00:17:48.274 "nvme_io": false, 00:17:48.274 "nvme_io_md": false, 00:17:48.274 "write_zeroes": true, 00:17:48.274 "zcopy": true, 00:17:48.274 "get_zone_info": false, 00:17:48.274 "zone_management": false, 00:17:48.274 "zone_append": false, 00:17:48.274 "compare": false, 00:17:48.274 "compare_and_write": false, 00:17:48.274 "abort": true, 00:17:48.274 "seek_hole": false, 00:17:48.274 "seek_data": false, 00:17:48.274 "copy": true, 00:17:48.274 "nvme_iov_md": false 00:17:48.274 }, 00:17:48.274 "memory_domains": [ 00:17:48.274 { 00:17:48.274 "dma_device_id": "system", 00:17:48.274 "dma_device_type": 1 00:17:48.274 }, 00:17:48.274 { 00:17:48.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.274 "dma_device_type": 2 00:17:48.274 } 00:17:48.274 ], 00:17:48.274 "driver_specific": {} 00:17:48.274 } 00:17:48.274 ] 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.274 13:11:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.274 [2024-12-06 13:11:54.757388] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.274 [2024-12-06 13:11:54.757475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.274 [2024-12-06 13:11:54.757514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.274 [2024-12-06 13:11:54.760272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.274 [2024-12-06 13:11:54.760345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.274 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.538 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.538 "name": "Existed_Raid", 00:17:48.538 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:48.538 "strip_size_kb": 64, 00:17:48.538 "state": "configuring", 00:17:48.538 "raid_level": "concat", 00:17:48.538 "superblock": true, 00:17:48.538 "num_base_bdevs": 4, 00:17:48.538 "num_base_bdevs_discovered": 3, 00:17:48.538 "num_base_bdevs_operational": 4, 00:17:48.538 "base_bdevs_list": [ 00:17:48.538 { 00:17:48.538 "name": "BaseBdev1", 00:17:48.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.538 "is_configured": false, 00:17:48.538 "data_offset": 0, 00:17:48.538 "data_size": 0 00:17:48.538 }, 00:17:48.538 { 00:17:48.538 "name": "BaseBdev2", 00:17:48.538 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:48.538 "is_configured": true, 00:17:48.538 "data_offset": 2048, 00:17:48.538 "data_size": 63488 
00:17:48.538 }, 00:17:48.538 { 00:17:48.538 "name": "BaseBdev3", 00:17:48.538 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:48.538 "is_configured": true, 00:17:48.538 "data_offset": 2048, 00:17:48.538 "data_size": 63488 00:17:48.538 }, 00:17:48.538 { 00:17:48.538 "name": "BaseBdev4", 00:17:48.538 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:48.538 "is_configured": true, 00:17:48.538 "data_offset": 2048, 00:17:48.538 "data_size": 63488 00:17:48.538 } 00:17:48.538 ] 00:17:48.538 }' 00:17:48.538 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.538 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.102 [2024-12-06 13:11:55.345575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.102 "name": "Existed_Raid", 00:17:49.102 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:49.102 "strip_size_kb": 64, 00:17:49.102 "state": "configuring", 00:17:49.102 "raid_level": "concat", 00:17:49.102 "superblock": true, 00:17:49.102 "num_base_bdevs": 4, 00:17:49.102 "num_base_bdevs_discovered": 2, 00:17:49.102 "num_base_bdevs_operational": 4, 00:17:49.102 "base_bdevs_list": [ 00:17:49.102 { 00:17:49.102 "name": "BaseBdev1", 00:17:49.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.102 "is_configured": false, 00:17:49.102 "data_offset": 0, 00:17:49.102 "data_size": 0 00:17:49.102 }, 00:17:49.102 { 00:17:49.102 "name": null, 00:17:49.102 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:49.102 "is_configured": false, 00:17:49.102 "data_offset": 0, 00:17:49.102 "data_size": 63488 
00:17:49.102 }, 00:17:49.102 { 00:17:49.102 "name": "BaseBdev3", 00:17:49.102 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:49.102 "is_configured": true, 00:17:49.102 "data_offset": 2048, 00:17:49.102 "data_size": 63488 00:17:49.102 }, 00:17:49.102 { 00:17:49.102 "name": "BaseBdev4", 00:17:49.102 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:49.102 "is_configured": true, 00:17:49.102 "data_offset": 2048, 00:17:49.102 "data_size": 63488 00:17:49.102 } 00:17:49.102 ] 00:17:49.102 }' 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.102 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.360 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.360 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.360 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.360 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.618 [2024-12-06 13:11:55.979296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.618 BaseBdev1 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.618 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.618 [ 00:17:49.618 { 00:17:49.618 "name": "BaseBdev1", 00:17:49.618 "aliases": [ 00:17:49.618 "dbd9a60f-5ac7-477b-a522-195ffc15ca92" 00:17:49.618 ], 00:17:49.618 "product_name": "Malloc disk", 00:17:49.618 "block_size": 512, 00:17:49.618 "num_blocks": 65536, 00:17:49.618 "uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:49.618 "assigned_rate_limits": { 00:17:49.618 "rw_ios_per_sec": 0, 00:17:49.618 "rw_mbytes_per_sec": 0, 
00:17:49.618 "r_mbytes_per_sec": 0, 00:17:49.618 "w_mbytes_per_sec": 0 00:17:49.618 }, 00:17:49.618 "claimed": true, 00:17:49.618 "claim_type": "exclusive_write", 00:17:49.618 "zoned": false, 00:17:49.618 "supported_io_types": { 00:17:49.618 "read": true, 00:17:49.618 "write": true, 00:17:49.618 "unmap": true, 00:17:49.618 "flush": true, 00:17:49.618 "reset": true, 00:17:49.618 "nvme_admin": false, 00:17:49.618 "nvme_io": false, 00:17:49.618 "nvme_io_md": false, 00:17:49.618 "write_zeroes": true, 00:17:49.618 "zcopy": true, 00:17:49.618 "get_zone_info": false, 00:17:49.618 "zone_management": false, 00:17:49.618 "zone_append": false, 00:17:49.618 "compare": false, 00:17:49.618 "compare_and_write": false, 00:17:49.618 "abort": true, 00:17:49.618 "seek_hole": false, 00:17:49.618 "seek_data": false, 00:17:49.618 "copy": true, 00:17:49.618 "nvme_iov_md": false 00:17:49.618 }, 00:17:49.618 "memory_domains": [ 00:17:49.618 { 00:17:49.618 "dma_device_id": "system", 00:17:49.618 "dma_device_type": 1 00:17:49.618 }, 00:17:49.618 { 00:17:49.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.618 "dma_device_type": 2 00:17:49.618 } 00:17:49.618 ], 00:17:49.618 "driver_specific": {} 00:17:49.618 } 00:17:49.618 ] 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:49.618 13:11:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.618 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.619 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.619 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.619 "name": "Existed_Raid", 00:17:49.619 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:49.619 "strip_size_kb": 64, 00:17:49.619 "state": "configuring", 00:17:49.619 "raid_level": "concat", 00:17:49.619 "superblock": true, 00:17:49.619 "num_base_bdevs": 4, 00:17:49.619 "num_base_bdevs_discovered": 3, 00:17:49.619 "num_base_bdevs_operational": 4, 00:17:49.619 "base_bdevs_list": [ 00:17:49.619 { 00:17:49.619 "name": "BaseBdev1", 00:17:49.619 "uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:49.619 "is_configured": true, 00:17:49.619 "data_offset": 2048, 00:17:49.619 "data_size": 63488 00:17:49.619 }, 00:17:49.619 { 
00:17:49.619 "name": null, 00:17:49.619 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:49.619 "is_configured": false, 00:17:49.619 "data_offset": 0, 00:17:49.619 "data_size": 63488 00:17:49.619 }, 00:17:49.619 { 00:17:49.619 "name": "BaseBdev3", 00:17:49.619 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:49.619 "is_configured": true, 00:17:49.619 "data_offset": 2048, 00:17:49.619 "data_size": 63488 00:17:49.619 }, 00:17:49.619 { 00:17:49.619 "name": "BaseBdev4", 00:17:49.619 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:49.619 "is_configured": true, 00:17:49.619 "data_offset": 2048, 00:17:49.619 "data_size": 63488 00:17:49.619 } 00:17:49.619 ] 00:17:49.619 }' 00:17:49.619 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.619 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.184 [2024-12-06 13:11:56.563622] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.184 13:11:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.184 "name": "Existed_Raid", 00:17:50.184 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:50.184 "strip_size_kb": 64, 00:17:50.184 "state": "configuring", 00:17:50.184 "raid_level": "concat", 00:17:50.184 "superblock": true, 00:17:50.184 "num_base_bdevs": 4, 00:17:50.184 "num_base_bdevs_discovered": 2, 00:17:50.184 "num_base_bdevs_operational": 4, 00:17:50.184 "base_bdevs_list": [ 00:17:50.184 { 00:17:50.184 "name": "BaseBdev1", 00:17:50.184 "uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:50.184 "is_configured": true, 00:17:50.184 "data_offset": 2048, 00:17:50.184 "data_size": 63488 00:17:50.184 }, 00:17:50.184 { 00:17:50.184 "name": null, 00:17:50.184 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:50.184 "is_configured": false, 00:17:50.184 "data_offset": 0, 00:17:50.184 "data_size": 63488 00:17:50.184 }, 00:17:50.184 { 00:17:50.184 "name": null, 00:17:50.184 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:50.184 "is_configured": false, 00:17:50.184 "data_offset": 0, 00:17:50.184 "data_size": 63488 00:17:50.184 }, 00:17:50.184 { 00:17:50.184 "name": "BaseBdev4", 00:17:50.184 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:50.184 "is_configured": true, 00:17:50.184 "data_offset": 2048, 00:17:50.184 "data_size": 63488 00:17:50.184 } 00:17:50.184 ] 00:17:50.184 }' 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.184 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.749 
13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.749 [2024-12-06 13:11:57.159796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.749 "name": "Existed_Raid", 00:17:50.749 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:50.749 "strip_size_kb": 64, 00:17:50.749 "state": "configuring", 00:17:50.749 "raid_level": "concat", 00:17:50.749 "superblock": true, 00:17:50.749 "num_base_bdevs": 4, 00:17:50.749 "num_base_bdevs_discovered": 3, 00:17:50.749 "num_base_bdevs_operational": 4, 00:17:50.749 "base_bdevs_list": [ 00:17:50.749 { 00:17:50.749 "name": "BaseBdev1", 00:17:50.749 "uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:50.749 "is_configured": true, 00:17:50.749 "data_offset": 2048, 00:17:50.749 "data_size": 63488 00:17:50.749 }, 00:17:50.749 { 00:17:50.749 "name": null, 00:17:50.749 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:50.749 "is_configured": false, 00:17:50.749 "data_offset": 0, 00:17:50.749 "data_size": 63488 00:17:50.749 }, 00:17:50.749 { 00:17:50.749 "name": "BaseBdev3", 00:17:50.749 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:50.749 "is_configured": true, 00:17:50.749 "data_offset": 2048, 00:17:50.749 "data_size": 63488 00:17:50.749 }, 00:17:50.749 { 00:17:50.749 "name": "BaseBdev4", 00:17:50.749 "uuid": 
"1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:50.749 "is_configured": true, 00:17:50.749 "data_offset": 2048, 00:17:50.749 "data_size": 63488 00:17:50.749 } 00:17:50.749 ] 00:17:50.749 }' 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.749 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.314 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.314 [2024-12-06 13:11:57.776120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.572 "name": "Existed_Raid", 00:17:51.572 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:51.572 "strip_size_kb": 64, 00:17:51.572 "state": "configuring", 00:17:51.572 "raid_level": "concat", 00:17:51.572 "superblock": true, 00:17:51.572 "num_base_bdevs": 4, 00:17:51.572 "num_base_bdevs_discovered": 2, 00:17:51.572 "num_base_bdevs_operational": 4, 00:17:51.572 "base_bdevs_list": [ 00:17:51.572 { 00:17:51.572 "name": null, 00:17:51.572 
"uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:51.572 "is_configured": false, 00:17:51.572 "data_offset": 0, 00:17:51.572 "data_size": 63488 00:17:51.572 }, 00:17:51.572 { 00:17:51.572 "name": null, 00:17:51.572 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:51.572 "is_configured": false, 00:17:51.572 "data_offset": 0, 00:17:51.572 "data_size": 63488 00:17:51.572 }, 00:17:51.572 { 00:17:51.572 "name": "BaseBdev3", 00:17:51.572 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:51.572 "is_configured": true, 00:17:51.572 "data_offset": 2048, 00:17:51.572 "data_size": 63488 00:17:51.572 }, 00:17:51.572 { 00:17:51.572 "name": "BaseBdev4", 00:17:51.572 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:51.572 "is_configured": true, 00:17:51.572 "data_offset": 2048, 00:17:51.572 "data_size": 63488 00:17:51.572 } 00:17:51.572 ] 00:17:51.572 }' 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.572 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.138 [2024-12-06 13:11:58.463938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.138 "name": "Existed_Raid", 00:17:52.138 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:52.138 "strip_size_kb": 64, 00:17:52.138 "state": "configuring", 00:17:52.138 "raid_level": "concat", 00:17:52.138 "superblock": true, 00:17:52.138 "num_base_bdevs": 4, 00:17:52.138 "num_base_bdevs_discovered": 3, 00:17:52.138 "num_base_bdevs_operational": 4, 00:17:52.138 "base_bdevs_list": [ 00:17:52.138 { 00:17:52.138 "name": null, 00:17:52.138 "uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:52.138 "is_configured": false, 00:17:52.138 "data_offset": 0, 00:17:52.138 "data_size": 63488 00:17:52.138 }, 00:17:52.138 { 00:17:52.138 "name": "BaseBdev2", 00:17:52.138 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:52.138 "is_configured": true, 00:17:52.138 "data_offset": 2048, 00:17:52.138 "data_size": 63488 00:17:52.138 }, 00:17:52.138 { 00:17:52.138 "name": "BaseBdev3", 00:17:52.138 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:52.138 "is_configured": true, 00:17:52.138 "data_offset": 2048, 00:17:52.138 "data_size": 63488 00:17:52.138 }, 00:17:52.138 { 00:17:52.138 "name": "BaseBdev4", 00:17:52.138 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:52.138 "is_configured": true, 00:17:52.138 "data_offset": 2048, 00:17:52.138 "data_size": 63488 00:17:52.138 } 00:17:52.138 ] 00:17:52.138 }' 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.138 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.704 13:11:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dbd9a60f-5ac7-477b-a522-195ffc15ca92 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.704 [2024-12-06 13:11:59.150823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:52.704 [2024-12-06 13:11:59.151186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.704 [2024-12-06 13:11:59.151206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:52.704 [2024-12-06 13:11:59.151569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:17:52.704 NewBaseBdev 00:17:52.704 [2024-12-06 13:11:59.151762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.704 [2024-12-06 13:11:59.151783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:52.704 [2024-12-06 13:11:59.151959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.704 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.705 13:11:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.705 [ 00:17:52.705 { 00:17:52.705 "name": "NewBaseBdev", 00:17:52.705 "aliases": [ 00:17:52.705 "dbd9a60f-5ac7-477b-a522-195ffc15ca92" 00:17:52.705 ], 00:17:52.705 "product_name": "Malloc disk", 00:17:52.705 "block_size": 512, 00:17:52.705 "num_blocks": 65536, 00:17:52.705 "uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:52.705 "assigned_rate_limits": { 00:17:52.705 "rw_ios_per_sec": 0, 00:17:52.705 "rw_mbytes_per_sec": 0, 00:17:52.705 "r_mbytes_per_sec": 0, 00:17:52.705 "w_mbytes_per_sec": 0 00:17:52.705 }, 00:17:52.705 "claimed": true, 00:17:52.705 "claim_type": "exclusive_write", 00:17:52.705 "zoned": false, 00:17:52.705 "supported_io_types": { 00:17:52.705 "read": true, 00:17:52.705 "write": true, 00:17:52.705 "unmap": true, 00:17:52.705 "flush": true, 00:17:52.705 "reset": true, 00:17:52.705 "nvme_admin": false, 00:17:52.705 "nvme_io": false, 00:17:52.705 "nvme_io_md": false, 00:17:52.705 "write_zeroes": true, 00:17:52.705 "zcopy": true, 00:17:52.705 "get_zone_info": false, 00:17:52.705 "zone_management": false, 00:17:52.705 "zone_append": false, 00:17:52.705 "compare": false, 00:17:52.705 "compare_and_write": false, 00:17:52.705 "abort": true, 00:17:52.705 "seek_hole": false, 00:17:52.705 "seek_data": false, 00:17:52.705 "copy": true, 00:17:52.705 "nvme_iov_md": false 00:17:52.705 }, 00:17:52.705 "memory_domains": [ 00:17:52.705 { 00:17:52.705 "dma_device_id": "system", 00:17:52.705 "dma_device_type": 1 00:17:52.705 }, 00:17:52.705 { 00:17:52.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.705 "dma_device_type": 2 00:17:52.705 } 00:17:52.705 ], 00:17:52.705 "driver_specific": {} 00:17:52.705 } 00:17:52.705 ] 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:52.705 13:11:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.705 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.981 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.981 "name": "Existed_Raid", 00:17:52.981 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:52.981 "strip_size_kb": 64, 00:17:52.981 
"state": "online", 00:17:52.981 "raid_level": "concat", 00:17:52.981 "superblock": true, 00:17:52.981 "num_base_bdevs": 4, 00:17:52.981 "num_base_bdevs_discovered": 4, 00:17:52.981 "num_base_bdevs_operational": 4, 00:17:52.981 "base_bdevs_list": [ 00:17:52.981 { 00:17:52.981 "name": "NewBaseBdev", 00:17:52.981 "uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:52.981 "is_configured": true, 00:17:52.981 "data_offset": 2048, 00:17:52.981 "data_size": 63488 00:17:52.981 }, 00:17:52.981 { 00:17:52.981 "name": "BaseBdev2", 00:17:52.981 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:52.981 "is_configured": true, 00:17:52.981 "data_offset": 2048, 00:17:52.981 "data_size": 63488 00:17:52.981 }, 00:17:52.981 { 00:17:52.981 "name": "BaseBdev3", 00:17:52.981 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:52.981 "is_configured": true, 00:17:52.981 "data_offset": 2048, 00:17:52.981 "data_size": 63488 00:17:52.981 }, 00:17:52.981 { 00:17:52.981 "name": "BaseBdev4", 00:17:52.981 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:52.981 "is_configured": true, 00:17:52.981 "data_offset": 2048, 00:17:52.981 "data_size": 63488 00:17:52.981 } 00:17:52.981 ] 00:17:52.981 }' 00:17:52.981 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.981 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.241 
13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.241 [2024-12-06 13:11:59.743592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.241 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.500 "name": "Existed_Raid", 00:17:53.500 "aliases": [ 00:17:53.500 "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5" 00:17:53.500 ], 00:17:53.500 "product_name": "Raid Volume", 00:17:53.500 "block_size": 512, 00:17:53.500 "num_blocks": 253952, 00:17:53.500 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:53.500 "assigned_rate_limits": { 00:17:53.500 "rw_ios_per_sec": 0, 00:17:53.500 "rw_mbytes_per_sec": 0, 00:17:53.500 "r_mbytes_per_sec": 0, 00:17:53.500 "w_mbytes_per_sec": 0 00:17:53.500 }, 00:17:53.500 "claimed": false, 00:17:53.500 "zoned": false, 00:17:53.500 "supported_io_types": { 00:17:53.500 "read": true, 00:17:53.500 "write": true, 00:17:53.500 "unmap": true, 00:17:53.500 "flush": true, 00:17:53.500 "reset": true, 00:17:53.500 "nvme_admin": false, 00:17:53.500 "nvme_io": false, 00:17:53.500 "nvme_io_md": false, 00:17:53.500 "write_zeroes": true, 00:17:53.500 "zcopy": false, 00:17:53.500 "get_zone_info": false, 00:17:53.500 "zone_management": false, 00:17:53.500 "zone_append": false, 00:17:53.500 "compare": false, 00:17:53.500 "compare_and_write": false, 00:17:53.500 "abort": 
false, 00:17:53.500 "seek_hole": false, 00:17:53.500 "seek_data": false, 00:17:53.500 "copy": false, 00:17:53.500 "nvme_iov_md": false 00:17:53.500 }, 00:17:53.500 "memory_domains": [ 00:17:53.500 { 00:17:53.500 "dma_device_id": "system", 00:17:53.500 "dma_device_type": 1 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.500 "dma_device_type": 2 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "dma_device_id": "system", 00:17:53.500 "dma_device_type": 1 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.500 "dma_device_type": 2 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "dma_device_id": "system", 00:17:53.500 "dma_device_type": 1 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.500 "dma_device_type": 2 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "dma_device_id": "system", 00:17:53.500 "dma_device_type": 1 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.500 "dma_device_type": 2 00:17:53.500 } 00:17:53.500 ], 00:17:53.500 "driver_specific": { 00:17:53.500 "raid": { 00:17:53.500 "uuid": "425a7c72-0208-4e3c-a9ef-609a9ed1b7a5", 00:17:53.500 "strip_size_kb": 64, 00:17:53.500 "state": "online", 00:17:53.500 "raid_level": "concat", 00:17:53.500 "superblock": true, 00:17:53.500 "num_base_bdevs": 4, 00:17:53.500 "num_base_bdevs_discovered": 4, 00:17:53.500 "num_base_bdevs_operational": 4, 00:17:53.500 "base_bdevs_list": [ 00:17:53.500 { 00:17:53.500 "name": "NewBaseBdev", 00:17:53.500 "uuid": "dbd9a60f-5ac7-477b-a522-195ffc15ca92", 00:17:53.500 "is_configured": true, 00:17:53.500 "data_offset": 2048, 00:17:53.500 "data_size": 63488 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "name": "BaseBdev2", 00:17:53.500 "uuid": "aea38ecd-f918-424e-97e7-77591e3fe7e4", 00:17:53.500 "is_configured": true, 00:17:53.500 "data_offset": 2048, 00:17:53.500 "data_size": 63488 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 
"name": "BaseBdev3", 00:17:53.500 "uuid": "f3606bdd-334d-47c1-be30-84c368a97377", 00:17:53.500 "is_configured": true, 00:17:53.500 "data_offset": 2048, 00:17:53.500 "data_size": 63488 00:17:53.500 }, 00:17:53.500 { 00:17:53.500 "name": "BaseBdev4", 00:17:53.500 "uuid": "1bce90f7-9c76-49ea-8b2c-143f2ffff434", 00:17:53.500 "is_configured": true, 00:17:53.500 "data_offset": 2048, 00:17:53.500 "data_size": 63488 00:17:53.500 } 00:17:53.500 ] 00:17:53.500 } 00:17:53.500 } 00:17:53.500 }' 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:53.500 BaseBdev2 00:17:53.500 BaseBdev3 00:17:53.500 BaseBdev4' 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.500 13:11:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.500 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.500 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.500 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:53.500 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.501 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.501 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.501 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.759 [2024-12-06 13:12:00.107186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.759 [2024-12-06 13:12:00.107248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.759 [2024-12-06 13:12:00.107370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.759 [2024-12-06 13:12:00.107525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.759 [2024-12-06 13:12:00.107545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72355 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72355 ']' 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72355 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72355 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.759 killing process with pid 72355 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72355' 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72355 00:17:53.759 [2024-12-06 13:12:00.148732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.759 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72355 00:17:54.018 [2024-12-06 13:12:00.528193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.393 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:55.393 00:17:55.393 real 0m13.065s 00:17:55.393 user 0m21.424s 00:17:55.393 sys 0m1.989s 00:17:55.393 13:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.393 13:12:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.393 ************************************ 00:17:55.393 END TEST raid_state_function_test_sb 00:17:55.393 ************************************ 00:17:55.393 13:12:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:17:55.393 13:12:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:55.393 13:12:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.393 13:12:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.393 ************************************ 00:17:55.393 START TEST raid_superblock_test 00:17:55.393 ************************************ 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73041 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73041 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73041 ']' 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.393 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.393 [2024-12-06 13:12:01.820902] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:17:55.393 [2024-12-06 13:12:01.821057] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73041 ] 00:17:55.652 [2024-12-06 13:12:01.996172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.652 [2024-12-06 13:12:02.140706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.910 [2024-12-06 13:12:02.366630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.910 [2024-12-06 13:12:02.366739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:56.478 
13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.478 malloc1 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.478 [2024-12-06 13:12:02.857235] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:56.478 [2024-12-06 13:12:02.857321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.478 [2024-12-06 13:12:02.857362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.478 [2024-12-06 13:12:02.857379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.478 [2024-12-06 13:12:02.861997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.478 [2024-12-06 13:12:02.862054] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:56.478 pt1 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.478 malloc2 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.478 [2024-12-06 13:12:02.918089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.478 [2024-12-06 13:12:02.918167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.478 [2024-12-06 13:12:02.918205] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.478 [2024-12-06 13:12:02.918242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.478 [2024-12-06 13:12:02.921389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.478 [2024-12-06 13:12:02.921434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.478 
pt2 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.478 malloc3 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.478 [2024-12-06 13:12:02.991681] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:56.478 [2024-12-06 13:12:02.991778] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.478 [2024-12-06 13:12:02.991818] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:56.478 [2024-12-06 13:12:02.991835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.478 [2024-12-06 13:12:02.995156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.478 [2024-12-06 13:12:02.995208] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:56.478 pt3 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.478 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.736 malloc4 00:17:56.736 13:12:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.737 [2024-12-06 13:12:03.051993] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:56.737 [2024-12-06 13:12:03.052089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.737 [2024-12-06 13:12:03.052124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:56.737 [2024-12-06 13:12:03.052140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.737 [2024-12-06 13:12:03.055238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.737 [2024-12-06 13:12:03.055284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:56.737 pt4 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.737 [2024-12-06 13:12:03.064014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:56.737 [2024-12-06 
13:12:03.066795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.737 [2024-12-06 13:12:03.066982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:56.737 [2024-12-06 13:12:03.067086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:56.737 [2024-12-06 13:12:03.067346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.737 [2024-12-06 13:12:03.067374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:56.737 [2024-12-06 13:12:03.067752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:56.737 [2024-12-06 13:12:03.068000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.737 [2024-12-06 13:12:03.068033] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.737 [2024-12-06 13:12:03.068256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.737 "name": "raid_bdev1", 00:17:56.737 "uuid": "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1", 00:17:56.737 "strip_size_kb": 64, 00:17:56.737 "state": "online", 00:17:56.737 "raid_level": "concat", 00:17:56.737 "superblock": true, 00:17:56.737 "num_base_bdevs": 4, 00:17:56.737 "num_base_bdevs_discovered": 4, 00:17:56.737 "num_base_bdevs_operational": 4, 00:17:56.737 "base_bdevs_list": [ 00:17:56.737 { 00:17:56.737 "name": "pt1", 00:17:56.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.737 "is_configured": true, 00:17:56.737 "data_offset": 2048, 00:17:56.737 "data_size": 63488 00:17:56.737 }, 00:17:56.737 { 00:17:56.737 "name": "pt2", 00:17:56.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.737 "is_configured": true, 00:17:56.737 "data_offset": 2048, 00:17:56.737 "data_size": 63488 00:17:56.737 }, 00:17:56.737 { 00:17:56.737 "name": "pt3", 00:17:56.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.737 "is_configured": true, 00:17:56.737 "data_offset": 2048, 00:17:56.737 
"data_size": 63488 00:17:56.737 }, 00:17:56.737 { 00:17:56.737 "name": "pt4", 00:17:56.737 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:56.737 "is_configured": true, 00:17:56.737 "data_offset": 2048, 00:17:56.737 "data_size": 63488 00:17:56.737 } 00:17:56.737 ] 00:17:56.737 }' 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.737 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.327 [2024-12-06 13:12:03.588865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.327 "name": "raid_bdev1", 00:17:57.327 "aliases": [ 00:17:57.327 "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1" 
00:17:57.327 ], 00:17:57.327 "product_name": "Raid Volume", 00:17:57.327 "block_size": 512, 00:17:57.327 "num_blocks": 253952, 00:17:57.327 "uuid": "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1", 00:17:57.327 "assigned_rate_limits": { 00:17:57.327 "rw_ios_per_sec": 0, 00:17:57.327 "rw_mbytes_per_sec": 0, 00:17:57.327 "r_mbytes_per_sec": 0, 00:17:57.327 "w_mbytes_per_sec": 0 00:17:57.327 }, 00:17:57.327 "claimed": false, 00:17:57.327 "zoned": false, 00:17:57.327 "supported_io_types": { 00:17:57.327 "read": true, 00:17:57.327 "write": true, 00:17:57.327 "unmap": true, 00:17:57.327 "flush": true, 00:17:57.327 "reset": true, 00:17:57.327 "nvme_admin": false, 00:17:57.327 "nvme_io": false, 00:17:57.327 "nvme_io_md": false, 00:17:57.327 "write_zeroes": true, 00:17:57.327 "zcopy": false, 00:17:57.327 "get_zone_info": false, 00:17:57.327 "zone_management": false, 00:17:57.327 "zone_append": false, 00:17:57.327 "compare": false, 00:17:57.327 "compare_and_write": false, 00:17:57.327 "abort": false, 00:17:57.327 "seek_hole": false, 00:17:57.327 "seek_data": false, 00:17:57.327 "copy": false, 00:17:57.327 "nvme_iov_md": false 00:17:57.327 }, 00:17:57.327 "memory_domains": [ 00:17:57.327 { 00:17:57.327 "dma_device_id": "system", 00:17:57.327 "dma_device_type": 1 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.327 "dma_device_type": 2 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "dma_device_id": "system", 00:17:57.327 "dma_device_type": 1 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.327 "dma_device_type": 2 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "dma_device_id": "system", 00:17:57.327 "dma_device_type": 1 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.327 "dma_device_type": 2 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "dma_device_id": "system", 00:17:57.327 "dma_device_type": 1 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:57.327 "dma_device_type": 2 00:17:57.327 } 00:17:57.327 ], 00:17:57.327 "driver_specific": { 00:17:57.327 "raid": { 00:17:57.327 "uuid": "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1", 00:17:57.327 "strip_size_kb": 64, 00:17:57.327 "state": "online", 00:17:57.327 "raid_level": "concat", 00:17:57.327 "superblock": true, 00:17:57.327 "num_base_bdevs": 4, 00:17:57.327 "num_base_bdevs_discovered": 4, 00:17:57.327 "num_base_bdevs_operational": 4, 00:17:57.327 "base_bdevs_list": [ 00:17:57.327 { 00:17:57.327 "name": "pt1", 00:17:57.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.327 "is_configured": true, 00:17:57.327 "data_offset": 2048, 00:17:57.327 "data_size": 63488 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "name": "pt2", 00:17:57.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.327 "is_configured": true, 00:17:57.327 "data_offset": 2048, 00:17:57.327 "data_size": 63488 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "name": "pt3", 00:17:57.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.327 "is_configured": true, 00:17:57.327 "data_offset": 2048, 00:17:57.327 "data_size": 63488 00:17:57.327 }, 00:17:57.327 { 00:17:57.327 "name": "pt4", 00:17:57.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.327 "is_configured": true, 00:17:57.327 "data_offset": 2048, 00:17:57.327 "data_size": 63488 00:17:57.327 } 00:17:57.327 ] 00:17:57.327 } 00:17:57.327 } 00:17:57.327 }' 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:57.327 pt2 00:17:57.327 pt3 00:17:57.327 pt4' 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.327 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.328 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:57.328 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.328 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.328 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.328 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.586 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.587 13:12:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.587 [2024-12-06 13:12:03.972948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.587 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dc3874e3-1aa8-4e4c-922c-ecb41032d3b1 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dc3874e3-1aa8-4e4c-922c-ecb41032d3b1 ']' 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.587 [2024-12-06 13:12:04.020574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.587 [2024-12-06 13:12:04.020744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.587 [2024-12-06 13:12:04.020999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.587 [2024-12-06 13:12:04.021222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.587 [2024-12-06 13:12:04.021382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.587 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.845 13:12:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.845 [2024-12-06 13:12:04.172659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:57.845 [2024-12-06 13:12:04.175545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:57.845 [2024-12-06 13:12:04.175621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:57.845 [2024-12-06 13:12:04.175679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:57.845 [2024-12-06 13:12:04.175766] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:57.845 [2024-12-06 13:12:04.175882] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:57.845 [2024-12-06 13:12:04.175915] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:57.845 [2024-12-06 13:12:04.175945] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:57.845 [2024-12-06 13:12:04.175967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.845 [2024-12-06 13:12:04.176000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:17:57.845 request: 00:17:57.845 { 00:17:57.845 "name": "raid_bdev1", 00:17:57.845 "raid_level": "concat", 00:17:57.845 "base_bdevs": [ 00:17:57.845 "malloc1", 00:17:57.845 "malloc2", 00:17:57.845 "malloc3", 00:17:57.845 "malloc4" 00:17:57.845 ], 00:17:57.845 "strip_size_kb": 64, 00:17:57.845 "superblock": false, 00:17:57.845 "method": "bdev_raid_create", 00:17:57.845 "req_id": 1 00:17:57.845 } 00:17:57.845 Got JSON-RPC error response 00:17:57.845 response: 00:17:57.845 { 00:17:57.845 "code": -17, 00:17:57.845 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:57.845 } 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.845 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.845 [2024-12-06 13:12:04.232620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.846 [2024-12-06 13:12:04.232851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.846 [2024-12-06 13:12:04.233025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:57.846 [2024-12-06 13:12:04.233161] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.846 [2024-12-06 13:12:04.236487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.846 [2024-12-06 13:12:04.236689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.846 [2024-12-06 13:12:04.236917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:57.846 [2024-12-06 13:12:04.237114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.846 pt1 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.846 "name": "raid_bdev1", 00:17:57.846 "uuid": "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1", 00:17:57.846 "strip_size_kb": 64, 00:17:57.846 "state": "configuring", 00:17:57.846 "raid_level": "concat", 00:17:57.846 "superblock": true, 00:17:57.846 "num_base_bdevs": 4, 00:17:57.846 "num_base_bdevs_discovered": 1, 00:17:57.846 "num_base_bdevs_operational": 4, 00:17:57.846 "base_bdevs_list": [ 00:17:57.846 { 00:17:57.846 "name": "pt1", 00:17:57.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.846 "is_configured": true, 00:17:57.846 "data_offset": 2048, 00:17:57.846 "data_size": 63488 00:17:57.846 }, 00:17:57.846 { 00:17:57.846 "name": null, 00:17:57.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.846 "is_configured": false, 00:17:57.846 "data_offset": 2048, 00:17:57.846 "data_size": 63488 00:17:57.846 }, 00:17:57.846 { 00:17:57.846 "name": null, 00:17:57.846 
"uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.846 "is_configured": false, 00:17:57.846 "data_offset": 2048, 00:17:57.846 "data_size": 63488 00:17:57.846 }, 00:17:57.846 { 00:17:57.846 "name": null, 00:17:57.846 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.846 "is_configured": false, 00:17:57.846 "data_offset": 2048, 00:17:57.846 "data_size": 63488 00:17:57.846 } 00:17:57.846 ] 00:17:57.846 }' 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.846 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.413 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:58.413 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.413 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.413 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.414 [2024-12-06 13:12:04.741298] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.414 [2024-12-06 13:12:04.741408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.414 [2024-12-06 13:12:04.741442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:58.414 [2024-12-06 13:12:04.741478] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.414 [2024-12-06 13:12:04.742174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.414 [2024-12-06 13:12:04.742240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.414 [2024-12-06 13:12:04.742366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.414 [2024-12-06 13:12:04.742407] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.414 pt2 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.414 [2024-12-06 13:12:04.749174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.414 13:12:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.414 "name": "raid_bdev1", 00:17:58.414 "uuid": "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1", 00:17:58.414 "strip_size_kb": 64, 00:17:58.414 "state": "configuring", 00:17:58.414 "raid_level": "concat", 00:17:58.414 "superblock": true, 00:17:58.414 "num_base_bdevs": 4, 00:17:58.414 "num_base_bdevs_discovered": 1, 00:17:58.414 "num_base_bdevs_operational": 4, 00:17:58.414 "base_bdevs_list": [ 00:17:58.414 { 00:17:58.414 "name": "pt1", 00:17:58.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.414 "is_configured": true, 00:17:58.414 "data_offset": 2048, 00:17:58.414 "data_size": 63488 00:17:58.414 }, 00:17:58.414 { 00:17:58.414 "name": null, 00:17:58.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.414 "is_configured": false, 00:17:58.414 "data_offset": 0, 00:17:58.414 "data_size": 63488 00:17:58.414 }, 00:17:58.414 { 00:17:58.414 "name": null, 00:17:58.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.414 "is_configured": false, 00:17:58.414 "data_offset": 2048, 00:17:58.414 "data_size": 63488 00:17:58.414 }, 00:17:58.414 { 00:17:58.414 "name": null, 00:17:58.414 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.414 "is_configured": false, 00:17:58.414 "data_offset": 2048, 00:17:58.414 "data_size": 63488 00:17:58.414 } 00:17:58.414 ] 00:17:58.414 }' 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.414 13:12:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.981 [2024-12-06 13:12:05.261412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.981 [2024-12-06 13:12:05.261549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.981 [2024-12-06 13:12:05.261587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:58.981 [2024-12-06 13:12:05.261603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.981 [2024-12-06 13:12:05.262303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.981 [2024-12-06 13:12:05.262331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.981 [2024-12-06 13:12:05.262450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.981 [2024-12-06 13:12:05.262502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.981 pt2 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.981 [2024-12-06 13:12:05.269309] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:58.981 [2024-12-06 13:12:05.269399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.981 [2024-12-06 13:12:05.269442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:58.981 [2024-12-06 13:12:05.269472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.981 [2024-12-06 13:12:05.269970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.981 [2024-12-06 13:12:05.270014] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:58.981 [2024-12-06 13:12:05.270096] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:58.981 [2024-12-06 13:12:05.270146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.981 pt3 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.981 [2024-12-06 13:12:05.281297] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:58.981 [2024-12-06 13:12:05.281395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.981 [2024-12-06 13:12:05.281423] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:58.981 [2024-12-06 13:12:05.281437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.981 [2024-12-06 13:12:05.281948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.981 [2024-12-06 13:12:05.281997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:58.981 [2024-12-06 13:12:05.282082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:58.981 [2024-12-06 13:12:05.282122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:58.981 [2024-12-06 13:12:05.282320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:58.981 [2024-12-06 13:12:05.282336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:58.981 [2024-12-06 13:12:05.282692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:58.981 [2024-12-06 13:12:05.283048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.981 [2024-12-06 13:12:05.283079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:58.981 [2024-12-06 13:12:05.283272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.981 pt4 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.981 "name": "raid_bdev1", 00:17:58.981 "uuid": "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1", 00:17:58.981 "strip_size_kb": 64, 00:17:58.981 "state": "online", 00:17:58.981 "raid_level": "concat", 00:17:58.981 
"superblock": true, 00:17:58.981 "num_base_bdevs": 4, 00:17:58.981 "num_base_bdevs_discovered": 4, 00:17:58.981 "num_base_bdevs_operational": 4, 00:17:58.981 "base_bdevs_list": [ 00:17:58.981 { 00:17:58.981 "name": "pt1", 00:17:58.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.981 "is_configured": true, 00:17:58.981 "data_offset": 2048, 00:17:58.981 "data_size": 63488 00:17:58.981 }, 00:17:58.981 { 00:17:58.981 "name": "pt2", 00:17:58.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.981 "is_configured": true, 00:17:58.981 "data_offset": 2048, 00:17:58.981 "data_size": 63488 00:17:58.981 }, 00:17:58.981 { 00:17:58.981 "name": "pt3", 00:17:58.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.981 "is_configured": true, 00:17:58.981 "data_offset": 2048, 00:17:58.981 "data_size": 63488 00:17:58.981 }, 00:17:58.981 { 00:17:58.981 "name": "pt4", 00:17:58.981 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.981 "is_configured": true, 00:17:58.981 "data_offset": 2048, 00:17:58.981 "data_size": 63488 00:17:58.981 } 00:17:58.981 ] 00:17:58.981 }' 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.981 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.547 13:12:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.547 [2024-12-06 13:12:05.814059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.547 "name": "raid_bdev1", 00:17:59.547 "aliases": [ 00:17:59.547 "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1" 00:17:59.547 ], 00:17:59.547 "product_name": "Raid Volume", 00:17:59.547 "block_size": 512, 00:17:59.547 "num_blocks": 253952, 00:17:59.547 "uuid": "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1", 00:17:59.547 "assigned_rate_limits": { 00:17:59.547 "rw_ios_per_sec": 0, 00:17:59.547 "rw_mbytes_per_sec": 0, 00:17:59.547 "r_mbytes_per_sec": 0, 00:17:59.547 "w_mbytes_per_sec": 0 00:17:59.547 }, 00:17:59.547 "claimed": false, 00:17:59.547 "zoned": false, 00:17:59.547 "supported_io_types": { 00:17:59.547 "read": true, 00:17:59.547 "write": true, 00:17:59.547 "unmap": true, 00:17:59.547 "flush": true, 00:17:59.547 "reset": true, 00:17:59.547 "nvme_admin": false, 00:17:59.547 "nvme_io": false, 00:17:59.547 "nvme_io_md": false, 00:17:59.547 "write_zeroes": true, 00:17:59.547 "zcopy": false, 00:17:59.547 "get_zone_info": false, 00:17:59.547 "zone_management": false, 00:17:59.547 "zone_append": false, 00:17:59.547 "compare": false, 00:17:59.547 "compare_and_write": false, 00:17:59.547 "abort": false, 00:17:59.547 "seek_hole": false, 00:17:59.547 "seek_data": false, 00:17:59.547 "copy": false, 00:17:59.547 "nvme_iov_md": false 00:17:59.547 }, 00:17:59.547 
"memory_domains": [ 00:17:59.547 { 00:17:59.547 "dma_device_id": "system", 00:17:59.547 "dma_device_type": 1 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.547 "dma_device_type": 2 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "dma_device_id": "system", 00:17:59.547 "dma_device_type": 1 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.547 "dma_device_type": 2 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "dma_device_id": "system", 00:17:59.547 "dma_device_type": 1 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.547 "dma_device_type": 2 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "dma_device_id": "system", 00:17:59.547 "dma_device_type": 1 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.547 "dma_device_type": 2 00:17:59.547 } 00:17:59.547 ], 00:17:59.547 "driver_specific": { 00:17:59.547 "raid": { 00:17:59.547 "uuid": "dc3874e3-1aa8-4e4c-922c-ecb41032d3b1", 00:17:59.547 "strip_size_kb": 64, 00:17:59.547 "state": "online", 00:17:59.547 "raid_level": "concat", 00:17:59.547 "superblock": true, 00:17:59.547 "num_base_bdevs": 4, 00:17:59.547 "num_base_bdevs_discovered": 4, 00:17:59.547 "num_base_bdevs_operational": 4, 00:17:59.547 "base_bdevs_list": [ 00:17:59.547 { 00:17:59.547 "name": "pt1", 00:17:59.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.547 "is_configured": true, 00:17:59.547 "data_offset": 2048, 00:17:59.547 "data_size": 63488 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "name": "pt2", 00:17:59.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.547 "is_configured": true, 00:17:59.547 "data_offset": 2048, 00:17:59.547 "data_size": 63488 00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "name": "pt3", 00:17:59.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.547 "is_configured": true, 00:17:59.547 "data_offset": 2048, 00:17:59.547 "data_size": 63488 
00:17:59.547 }, 00:17:59.547 { 00:17:59.547 "name": "pt4", 00:17:59.547 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.547 "is_configured": true, 00:17:59.547 "data_offset": 2048, 00:17:59.547 "data_size": 63488 00:17:59.547 } 00:17:59.547 ] 00:17:59.547 } 00:17:59.547 } 00:17:59.547 }' 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:59.547 pt2 00:17:59.547 pt3 00:17:59.547 pt4' 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.547 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.547 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.548 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.548 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.548 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:59.548 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.548 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.548 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.806 
13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.806 [2024-12-06 13:12:06.178017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dc3874e3-1aa8-4e4c-922c-ecb41032d3b1 '!=' dc3874e3-1aa8-4e4c-922c-ecb41032d3b1 ']' 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73041 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73041 ']' 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73041 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73041 00:17:59.806 killing process with pid 73041 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73041' 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73041 00:17:59.806 [2024-12-06 13:12:06.259605] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.806 [2024-12-06 13:12:06.259732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.806 [2024-12-06 13:12:06.259846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.806 13:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73041 00:17:59.806 [2024-12-06 13:12:06.260041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:00.374 [2024-12-06 13:12:06.634605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.326 ************************************ 00:18:01.326 END TEST raid_superblock_test 00:18:01.326 ************************************ 00:18:01.326 13:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:01.326 00:18:01.326 real 0m6.028s 00:18:01.326 user 0m8.957s 00:18:01.326 sys 0m0.917s 00:18:01.326 13:12:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.326 13:12:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.326 13:12:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:18:01.326 13:12:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:01.326 13:12:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.326 13:12:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.326 ************************************ 00:18:01.326 START TEST raid_read_error_test 00:18:01.326 ************************************ 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OAFVK3Y4Wd 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73307 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73307 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73307 ']' 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.326 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.584 [2024-12-06 13:12:07.944841] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:18:01.584 [2024-12-06 13:12:07.945360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73307 ] 00:18:01.842 [2024-12-06 13:12:08.133876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.842 [2024-12-06 13:12:08.284453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.101 [2024-12-06 13:12:08.512007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.101 [2024-12-06 13:12:08.512063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.670 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.670 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:18:02.670 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:02.670 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:02.670 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 BaseBdev1_malloc 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 true 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 [2024-12-06 13:12:09.024582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:02.670 [2024-12-06 13:12:09.024661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.670 [2024-12-06 13:12:09.024694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:02.670 [2024-12-06 13:12:09.024714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.670 [2024-12-06 13:12:09.027732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.670 [2024-12-06 13:12:09.027786] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.670 BaseBdev1 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 BaseBdev2_malloc 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 true 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 [2024-12-06 13:12:09.096151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:02.670 [2024-12-06 13:12:09.096412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.670 [2024-12-06 13:12:09.096506] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:02.670 [2024-12-06 13:12:09.096534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.670 [2024-12-06 13:12:09.099625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.670 [2024-12-06 13:12:09.099690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:02.670 BaseBdev2 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 BaseBdev3_malloc 00:18:02.670 13:12:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 true 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.670 [2024-12-06 13:12:09.175404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:02.670 [2024-12-06 13:12:09.175674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.670 [2024-12-06 13:12:09.175716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:02.670 [2024-12-06 13:12:09.175738] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.670 [2024-12-06 13:12:09.178949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.670 [2024-12-06 13:12:09.179195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:02.670 BaseBdev3 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:18:02.670 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.671 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.931 BaseBdev4_malloc 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.931 true 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.931 [2024-12-06 13:12:09.239962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:02.931 [2024-12-06 13:12:09.240042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.931 [2024-12-06 13:12:09.240075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:02.931 [2024-12-06 13:12:09.240094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.931 [2024-12-06 13:12:09.243254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.931 [2024-12-06 13:12:09.243528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:02.931 BaseBdev4 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.931 [2024-12-06 13:12:09.248203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.931 [2024-12-06 13:12:09.250983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.931 [2024-12-06 13:12:09.251098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:02.931 [2024-12-06 13:12:09.251200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:02.931 [2024-12-06 13:12:09.251708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:02.931 [2024-12-06 13:12:09.251776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:02.931 [2024-12-06 13:12:09.252202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:02.931 [2024-12-06 13:12:09.252504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:02.931 [2024-12-06 13:12:09.252645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:02.931 [2024-12-06 13:12:09.253095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:02.931 13:12:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.931 "name": "raid_bdev1", 00:18:02.931 "uuid": "19e770ea-880b-4416-9d10-686cca64de37", 00:18:02.931 "strip_size_kb": 64, 00:18:02.931 "state": "online", 00:18:02.931 "raid_level": "concat", 00:18:02.931 "superblock": true, 00:18:02.931 "num_base_bdevs": 4, 00:18:02.931 "num_base_bdevs_discovered": 4, 00:18:02.931 "num_base_bdevs_operational": 4, 00:18:02.931 "base_bdevs_list": [ 
00:18:02.931 { 00:18:02.931 "name": "BaseBdev1", 00:18:02.931 "uuid": "707eacd4-acf8-5586-9669-f14c86079c7b", 00:18:02.931 "is_configured": true, 00:18:02.931 "data_offset": 2048, 00:18:02.931 "data_size": 63488 00:18:02.931 }, 00:18:02.931 { 00:18:02.931 "name": "BaseBdev2", 00:18:02.931 "uuid": "a19e781f-8bf6-57c5-b8fd-0eecce981157", 00:18:02.931 "is_configured": true, 00:18:02.931 "data_offset": 2048, 00:18:02.931 "data_size": 63488 00:18:02.931 }, 00:18:02.931 { 00:18:02.931 "name": "BaseBdev3", 00:18:02.931 "uuid": "69c7a282-a356-5050-9a42-ef3e1ad09023", 00:18:02.931 "is_configured": true, 00:18:02.931 "data_offset": 2048, 00:18:02.931 "data_size": 63488 00:18:02.931 }, 00:18:02.931 { 00:18:02.931 "name": "BaseBdev4", 00:18:02.931 "uuid": "3d629d17-1932-5fdf-848d-74307915eace", 00:18:02.931 "is_configured": true, 00:18:02.931 "data_offset": 2048, 00:18:02.931 "data_size": 63488 00:18:02.931 } 00:18:02.931 ] 00:18:02.931 }' 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.931 13:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.498 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:03.498 13:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:03.498 [2024-12-06 13:12:09.906901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.433 13:12:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.433 13:12:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.433 "name": "raid_bdev1", 00:18:04.433 "uuid": "19e770ea-880b-4416-9d10-686cca64de37", 00:18:04.433 "strip_size_kb": 64, 00:18:04.433 "state": "online", 00:18:04.433 "raid_level": "concat", 00:18:04.433 "superblock": true, 00:18:04.433 "num_base_bdevs": 4, 00:18:04.433 "num_base_bdevs_discovered": 4, 00:18:04.433 "num_base_bdevs_operational": 4, 00:18:04.433 "base_bdevs_list": [ 00:18:04.433 { 00:18:04.433 "name": "BaseBdev1", 00:18:04.433 "uuid": "707eacd4-acf8-5586-9669-f14c86079c7b", 00:18:04.433 "is_configured": true, 00:18:04.433 "data_offset": 2048, 00:18:04.433 "data_size": 63488 00:18:04.433 }, 00:18:04.433 { 00:18:04.433 "name": "BaseBdev2", 00:18:04.433 "uuid": "a19e781f-8bf6-57c5-b8fd-0eecce981157", 00:18:04.433 "is_configured": true, 00:18:04.433 "data_offset": 2048, 00:18:04.433 "data_size": 63488 00:18:04.433 }, 00:18:04.433 { 00:18:04.433 "name": "BaseBdev3", 00:18:04.433 "uuid": "69c7a282-a356-5050-9a42-ef3e1ad09023", 00:18:04.433 "is_configured": true, 00:18:04.433 "data_offset": 2048, 00:18:04.433 "data_size": 63488 00:18:04.433 }, 00:18:04.433 { 00:18:04.433 "name": "BaseBdev4", 00:18:04.433 "uuid": "3d629d17-1932-5fdf-848d-74307915eace", 00:18:04.433 "is_configured": true, 00:18:04.433 "data_offset": 2048, 00:18:04.433 "data_size": 63488 00:18:04.433 } 00:18:04.433 ] 00:18:04.433 }' 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.433 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.999 [2024-12-06 13:12:11.325659] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.999 [2024-12-06 13:12:11.325703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.999 [2024-12-06 13:12:11.329435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.999 [2024-12-06 13:12:11.329661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.999 [2024-12-06 13:12:11.329850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.999 [2024-12-06 13:12:11.330029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.999 { 00:18:04.999 "results": [ 00:18:04.999 { 00:18:04.999 "job": "raid_bdev1", 00:18:04.999 "core_mask": "0x1", 00:18:04.999 "workload": "randrw", 00:18:04.999 "percentage": 50, 00:18:04.999 "status": "finished", 00:18:04.999 "queue_depth": 1, 00:18:04.999 "io_size": 131072, 00:18:04.999 "runtime": 1.415875, 00:18:04.999 "iops": 9453.518142491392, 00:18:04.999 "mibps": 1181.689767811424, 00:18:04.999 "io_failed": 1, 00:18:04.999 "io_timeout": 0, 00:18:04.999 "avg_latency_us": 148.59053067655486, 00:18:04.999 "min_latency_us": 39.33090909090909, 00:18:04.999 "max_latency_us": 1876.7127272727273 00:18:04.999 } 00:18:04.999 ], 00:18:04.999 "core_count": 1 00:18:04.999 } 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73307 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73307 ']' 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73307 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73307 00:18:04.999 killing process with pid 73307 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73307' 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73307 00:18:04.999 [2024-12-06 13:12:11.372560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.999 13:12:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73307 00:18:05.257 [2024-12-06 13:12:11.700374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OAFVK3Y4Wd 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:06.631 ************************************ 00:18:06.631 END TEST raid_read_error_test 00:18:06.631 ************************************ 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:18:06.631 00:18:06.631 real 0m5.110s 
00:18:06.631 user 0m6.193s 00:18:06.631 sys 0m0.725s 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.631 13:12:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.631 13:12:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:18:06.631 13:12:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:06.631 13:12:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.631 13:12:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.631 ************************************ 00:18:06.631 START TEST raid_write_error_test 00:18:06.631 ************************************ 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d6OPE71gXQ 00:18:06.631 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73457 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73457 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73457 ']' 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.631 13:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.631 [2024-12-06 13:12:13.092233] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:18:06.631 [2024-12-06 13:12:13.092389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73457 ] 00:18:06.889 [2024-12-06 13:12:13.270011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.147 [2024-12-06 13:12:13.421720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.147 [2024-12-06 13:12:13.651392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.147 [2024-12-06 13:12:13.651493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.715 BaseBdev1_malloc 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.715 true 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.715 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.715 [2024-12-06 13:12:14.171721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:07.715 [2024-12-06 13:12:14.171819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.715 [2024-12-06 13:12:14.171853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:07.716 [2024-12-06 13:12:14.171873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.716 [2024-12-06 13:12:14.175024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.716 [2024-12-06 13:12:14.175081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:07.716 BaseBdev1 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.716 BaseBdev2_malloc 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:07.716 13:12:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.716 true 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.716 [2024-12-06 13:12:14.231965] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:07.716 [2024-12-06 13:12:14.232044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.716 [2024-12-06 13:12:14.232071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:07.716 [2024-12-06 13:12:14.232089] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.716 [2024-12-06 13:12:14.235178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.716 [2024-12-06 13:12:14.235399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:07.716 BaseBdev2 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.716 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:07.975 BaseBdev3_malloc 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.975 true 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.975 [2024-12-06 13:12:14.302036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:07.975 [2024-12-06 13:12:14.302118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.975 [2024-12-06 13:12:14.302146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:07.975 [2024-12-06 13:12:14.302165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.975 [2024-12-06 13:12:14.305394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.975 [2024-12-06 13:12:14.305463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:07.975 BaseBdev3 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.975 BaseBdev4_malloc 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.975 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.975 true 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.976 [2024-12-06 13:12:14.361981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:07.976 [2024-12-06 13:12:14.362060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.976 [2024-12-06 13:12:14.362090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:07.976 [2024-12-06 13:12:14.362109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.976 [2024-12-06 13:12:14.365176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.976 [2024-12-06 13:12:14.365389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:07.976 BaseBdev4 
00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.976 [2024-12-06 13:12:14.370067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.976 [2024-12-06 13:12:14.372817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.976 [2024-12-06 13:12:14.372926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:07.976 [2024-12-06 13:12:14.373022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:07.976 [2024-12-06 13:12:14.373318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:07.976 [2024-12-06 13:12:14.373346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:07.976 [2024-12-06 13:12:14.373712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:07.976 [2024-12-06 13:12:14.373945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:07.976 [2024-12-06 13:12:14.373965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:07.976 [2024-12-06 13:12:14.374274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.976 "name": "raid_bdev1", 00:18:07.976 "uuid": "7e55539e-4b37-47f4-ba06-c7e4d706f82e", 00:18:07.976 "strip_size_kb": 64, 00:18:07.976 "state": "online", 00:18:07.976 "raid_level": "concat", 00:18:07.976 "superblock": true, 00:18:07.976 "num_base_bdevs": 4, 00:18:07.976 "num_base_bdevs_discovered": 4, 00:18:07.976 
"num_base_bdevs_operational": 4, 00:18:07.976 "base_bdevs_list": [ 00:18:07.976 { 00:18:07.976 "name": "BaseBdev1", 00:18:07.976 "uuid": "9d145c93-0048-5f70-9c9f-572b340856b8", 00:18:07.976 "is_configured": true, 00:18:07.976 "data_offset": 2048, 00:18:07.976 "data_size": 63488 00:18:07.976 }, 00:18:07.976 { 00:18:07.976 "name": "BaseBdev2", 00:18:07.976 "uuid": "7e67301b-57d1-5e48-b0d3-c0445d319936", 00:18:07.976 "is_configured": true, 00:18:07.976 "data_offset": 2048, 00:18:07.976 "data_size": 63488 00:18:07.976 }, 00:18:07.976 { 00:18:07.976 "name": "BaseBdev3", 00:18:07.976 "uuid": "f6e81a22-5fd5-5176-b6c0-25af7e2086f4", 00:18:07.976 "is_configured": true, 00:18:07.976 "data_offset": 2048, 00:18:07.976 "data_size": 63488 00:18:07.976 }, 00:18:07.976 { 00:18:07.976 "name": "BaseBdev4", 00:18:07.976 "uuid": "0f01b4aa-46e8-582b-9a68-798cab8f5267", 00:18:07.976 "is_configured": true, 00:18:07.976 "data_offset": 2048, 00:18:07.976 "data_size": 63488 00:18:07.976 } 00:18:07.976 ] 00:18:07.976 }' 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.976 13:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.617 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:08.617 13:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:08.617 [2024-12-06 13:12:15.016072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.554 13:12:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.554 "name": "raid_bdev1", 00:18:09.554 "uuid": "7e55539e-4b37-47f4-ba06-c7e4d706f82e", 00:18:09.554 "strip_size_kb": 64, 00:18:09.554 "state": "online", 00:18:09.554 "raid_level": "concat", 00:18:09.554 "superblock": true, 00:18:09.554 "num_base_bdevs": 4, 00:18:09.554 "num_base_bdevs_discovered": 4, 00:18:09.554 "num_base_bdevs_operational": 4, 00:18:09.554 "base_bdevs_list": [ 00:18:09.554 { 00:18:09.554 "name": "BaseBdev1", 00:18:09.554 "uuid": "9d145c93-0048-5f70-9c9f-572b340856b8", 00:18:09.554 "is_configured": true, 00:18:09.554 "data_offset": 2048, 00:18:09.554 "data_size": 63488 00:18:09.554 }, 00:18:09.554 { 00:18:09.554 "name": "BaseBdev2", 00:18:09.554 "uuid": "7e67301b-57d1-5e48-b0d3-c0445d319936", 00:18:09.554 "is_configured": true, 00:18:09.554 "data_offset": 2048, 00:18:09.554 "data_size": 63488 00:18:09.554 }, 00:18:09.554 { 00:18:09.554 "name": "BaseBdev3", 00:18:09.554 "uuid": "f6e81a22-5fd5-5176-b6c0-25af7e2086f4", 00:18:09.554 "is_configured": true, 00:18:09.554 "data_offset": 2048, 00:18:09.554 "data_size": 63488 00:18:09.554 }, 00:18:09.554 { 00:18:09.554 "name": "BaseBdev4", 00:18:09.554 "uuid": "0f01b4aa-46e8-582b-9a68-798cab8f5267", 00:18:09.554 "is_configured": true, 00:18:09.554 "data_offset": 2048, 00:18:09.554 "data_size": 63488 00:18:09.554 } 00:18:09.554 ] 00:18:09.554 }' 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.554 13:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.121 [2024-12-06 13:12:16.433668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.121 [2024-12-06 13:12:16.433712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.121 { 00:18:10.121 "results": [ 00:18:10.121 { 00:18:10.121 "job": "raid_bdev1", 00:18:10.121 "core_mask": "0x1", 00:18:10.121 "workload": "randrw", 00:18:10.121 "percentage": 50, 00:18:10.121 "status": "finished", 00:18:10.121 "queue_depth": 1, 00:18:10.121 "io_size": 131072, 00:18:10.121 "runtime": 1.415012, 00:18:10.121 "iops": 9402.040406724465, 00:18:10.121 "mibps": 1175.2550508405582, 00:18:10.121 "io_failed": 1, 00:18:10.121 "io_timeout": 0, 00:18:10.121 "avg_latency_us": 149.20081609784427, 00:18:10.121 "min_latency_us": 39.33090909090909, 00:18:10.121 "max_latency_us": 1936.290909090909 00:18:10.121 } 00:18:10.121 ], 00:18:10.121 "core_count": 1 00:18:10.121 } 00:18:10.121 [2024-12-06 13:12:16.437220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.121 [2024-12-06 13:12:16.437302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.121 [2024-12-06 13:12:16.437366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.121 [2024-12-06 13:12:16.437388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73457 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73457 ']' 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73457 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73457 00:18:10.121 killing process with pid 73457 00:18:10.121 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.122 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.122 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73457' 00:18:10.122 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73457 00:18:10.122 13:12:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73457 00:18:10.122 [2024-12-06 13:12:16.473149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.380 [2024-12-06 13:12:16.791964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d6OPE71gXQ 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:18:11.758 00:18:11.758 real 0m5.050s 00:18:11.758 user 0m6.133s 
00:18:11.758 sys 0m0.671s 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.758 ************************************ 00:18:11.758 END TEST raid_write_error_test 00:18:11.758 ************************************ 00:18:11.758 13:12:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.758 13:12:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:18:11.758 13:12:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:11.758 13:12:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:11.758 13:12:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.758 13:12:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.758 ************************************ 00:18:11.758 START TEST raid_state_function_test 00:18:11.758 ************************************ 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.758 
13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:11.758 13:12:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73603 00:18:11.758 Process raid pid: 73603 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73603' 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73603 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73603 ']' 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.758 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.758 [2024-12-06 13:12:18.215675] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:18:11.758 [2024-12-06 13:12:18.215921] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.017 [2024-12-06 13:12:18.400407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.276 [2024-12-06 13:12:18.556556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.276 [2024-12-06 13:12:18.794950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.276 [2024-12-06 13:12:18.795242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.886 [2024-12-06 13:12:19.265916] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.886 [2024-12-06 13:12:19.266032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.886 [2024-12-06 13:12:19.266050] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.886 [2024-12-06 13:12:19.266067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.886 [2024-12-06 13:12:19.266077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:12.886 [2024-12-06 13:12:19.266092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:12.886 [2024-12-06 13:12:19.266102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:12.886 [2024-12-06 13:12:19.266116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.886 "name": "Existed_Raid", 00:18:12.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.886 "strip_size_kb": 0, 00:18:12.886 "state": "configuring", 00:18:12.886 "raid_level": "raid1", 00:18:12.886 "superblock": false, 00:18:12.886 "num_base_bdevs": 4, 00:18:12.886 "num_base_bdevs_discovered": 0, 00:18:12.886 "num_base_bdevs_operational": 4, 00:18:12.886 "base_bdevs_list": [ 00:18:12.886 { 00:18:12.886 "name": "BaseBdev1", 00:18:12.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.886 "is_configured": false, 00:18:12.886 "data_offset": 0, 00:18:12.886 "data_size": 0 00:18:12.886 }, 00:18:12.886 { 00:18:12.886 "name": "BaseBdev2", 00:18:12.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.886 "is_configured": false, 00:18:12.886 "data_offset": 0, 00:18:12.886 "data_size": 0 00:18:12.886 }, 00:18:12.886 { 00:18:12.886 "name": "BaseBdev3", 00:18:12.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.886 "is_configured": false, 00:18:12.886 "data_offset": 0, 00:18:12.886 "data_size": 0 00:18:12.886 }, 00:18:12.886 { 00:18:12.886 "name": "BaseBdev4", 00:18:12.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.886 "is_configured": false, 00:18:12.886 "data_offset": 0, 00:18:12.886 "data_size": 0 00:18:12.886 } 00:18:12.886 ] 00:18:12.886 }' 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.886 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.455 [2024-12-06 13:12:19.806047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.455 [2024-12-06 13:12:19.806102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.455 [2024-12-06 13:12:19.817983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:13.455 [2024-12-06 13:12:19.818183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:13.455 [2024-12-06 13:12:19.818342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.455 [2024-12-06 13:12:19.818408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.455 [2024-12-06 13:12:19.818545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.455 [2024-12-06 13:12:19.818608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:13.455 [2024-12-06 13:12:19.818748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:13.455 [2024-12-06 13:12:19.818795] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.455 [2024-12-06 13:12:19.868189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.455 BaseBdev1 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.455 [ 00:18:13.455 { 00:18:13.455 "name": "BaseBdev1", 00:18:13.455 "aliases": [ 00:18:13.455 "c6ebd7f1-2b8b-41f2-9a6a-165bd8c08fa6" 00:18:13.455 ], 00:18:13.455 "product_name": "Malloc disk", 00:18:13.455 "block_size": 512, 00:18:13.455 "num_blocks": 65536, 00:18:13.455 "uuid": "c6ebd7f1-2b8b-41f2-9a6a-165bd8c08fa6", 00:18:13.455 "assigned_rate_limits": { 00:18:13.455 "rw_ios_per_sec": 0, 00:18:13.455 "rw_mbytes_per_sec": 0, 00:18:13.455 "r_mbytes_per_sec": 0, 00:18:13.455 "w_mbytes_per_sec": 0 00:18:13.455 }, 00:18:13.455 "claimed": true, 00:18:13.455 "claim_type": "exclusive_write", 00:18:13.455 "zoned": false, 00:18:13.455 "supported_io_types": { 00:18:13.455 "read": true, 00:18:13.455 "write": true, 00:18:13.455 "unmap": true, 00:18:13.455 "flush": true, 00:18:13.455 "reset": true, 00:18:13.455 "nvme_admin": false, 00:18:13.455 "nvme_io": false, 00:18:13.455 "nvme_io_md": false, 00:18:13.455 "write_zeroes": true, 00:18:13.455 "zcopy": true, 00:18:13.455 "get_zone_info": false, 00:18:13.455 "zone_management": false, 00:18:13.455 "zone_append": false, 00:18:13.455 "compare": false, 00:18:13.455 "compare_and_write": false, 00:18:13.455 "abort": true, 00:18:13.455 "seek_hole": false, 00:18:13.455 "seek_data": false, 00:18:13.455 "copy": true, 00:18:13.455 "nvme_iov_md": false 00:18:13.455 }, 00:18:13.455 "memory_domains": [ 00:18:13.455 { 00:18:13.455 "dma_device_id": "system", 00:18:13.455 "dma_device_type": 1 00:18:13.455 }, 00:18:13.455 { 00:18:13.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.455 "dma_device_type": 2 00:18:13.455 } 00:18:13.455 ], 00:18:13.455 "driver_specific": {} 00:18:13.455 } 00:18:13.455 ] 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.455 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.456 "name": "Existed_Raid", 
00:18:13.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.456 "strip_size_kb": 0, 00:18:13.456 "state": "configuring", 00:18:13.456 "raid_level": "raid1", 00:18:13.456 "superblock": false, 00:18:13.456 "num_base_bdevs": 4, 00:18:13.456 "num_base_bdevs_discovered": 1, 00:18:13.456 "num_base_bdevs_operational": 4, 00:18:13.456 "base_bdevs_list": [ 00:18:13.456 { 00:18:13.456 "name": "BaseBdev1", 00:18:13.456 "uuid": "c6ebd7f1-2b8b-41f2-9a6a-165bd8c08fa6", 00:18:13.456 "is_configured": true, 00:18:13.456 "data_offset": 0, 00:18:13.456 "data_size": 65536 00:18:13.456 }, 00:18:13.456 { 00:18:13.456 "name": "BaseBdev2", 00:18:13.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.456 "is_configured": false, 00:18:13.456 "data_offset": 0, 00:18:13.456 "data_size": 0 00:18:13.456 }, 00:18:13.456 { 00:18:13.456 "name": "BaseBdev3", 00:18:13.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.456 "is_configured": false, 00:18:13.456 "data_offset": 0, 00:18:13.456 "data_size": 0 00:18:13.456 }, 00:18:13.456 { 00:18:13.456 "name": "BaseBdev4", 00:18:13.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.456 "is_configured": false, 00:18:13.456 "data_offset": 0, 00:18:13.456 "data_size": 0 00:18:13.456 } 00:18:13.456 ] 00:18:13.456 }' 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.456 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.022 [2024-12-06 13:12:20.436425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.022 [2024-12-06 13:12:20.436662] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.022 [2024-12-06 13:12:20.444465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.022 [2024-12-06 13:12:20.447376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.022 [2024-12-06 13:12:20.447577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.022 [2024-12-06 13:12:20.447739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:14.022 [2024-12-06 13:12:20.447805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.022 [2024-12-06 13:12:20.448064] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:14.022 [2024-12-06 13:12:20.448127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:14.022 
13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.022 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.022 "name": "Existed_Raid", 00:18:14.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.023 "strip_size_kb": 0, 00:18:14.023 "state": "configuring", 00:18:14.023 "raid_level": "raid1", 00:18:14.023 "superblock": false, 00:18:14.023 "num_base_bdevs": 4, 00:18:14.023 "num_base_bdevs_discovered": 1, 
00:18:14.023 "num_base_bdevs_operational": 4, 00:18:14.023 "base_bdevs_list": [ 00:18:14.023 { 00:18:14.023 "name": "BaseBdev1", 00:18:14.023 "uuid": "c6ebd7f1-2b8b-41f2-9a6a-165bd8c08fa6", 00:18:14.023 "is_configured": true, 00:18:14.023 "data_offset": 0, 00:18:14.023 "data_size": 65536 00:18:14.023 }, 00:18:14.023 { 00:18:14.023 "name": "BaseBdev2", 00:18:14.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.023 "is_configured": false, 00:18:14.023 "data_offset": 0, 00:18:14.023 "data_size": 0 00:18:14.023 }, 00:18:14.023 { 00:18:14.023 "name": "BaseBdev3", 00:18:14.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.023 "is_configured": false, 00:18:14.023 "data_offset": 0, 00:18:14.023 "data_size": 0 00:18:14.023 }, 00:18:14.023 { 00:18:14.023 "name": "BaseBdev4", 00:18:14.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.023 "is_configured": false, 00:18:14.023 "data_offset": 0, 00:18:14.023 "data_size": 0 00:18:14.023 } 00:18:14.023 ] 00:18:14.023 }' 00:18:14.023 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.023 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.589 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:14.589 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.589 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.589 [2024-12-06 13:12:21.024993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.589 BaseBdev2 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.589 [ 00:18:14.589 { 00:18:14.589 "name": "BaseBdev2", 00:18:14.589 "aliases": [ 00:18:14.589 "9442fe23-26ca-4822-95ba-717447d58e39" 00:18:14.589 ], 00:18:14.589 "product_name": "Malloc disk", 00:18:14.589 "block_size": 512, 00:18:14.589 "num_blocks": 65536, 00:18:14.589 "uuid": "9442fe23-26ca-4822-95ba-717447d58e39", 00:18:14.589 "assigned_rate_limits": { 00:18:14.589 "rw_ios_per_sec": 0, 00:18:14.589 "rw_mbytes_per_sec": 0, 00:18:14.589 "r_mbytes_per_sec": 0, 00:18:14.589 "w_mbytes_per_sec": 0 00:18:14.589 }, 00:18:14.589 "claimed": true, 00:18:14.589 "claim_type": "exclusive_write", 00:18:14.589 "zoned": false, 00:18:14.589 "supported_io_types": { 00:18:14.589 "read": true, 
00:18:14.589 "write": true, 00:18:14.589 "unmap": true, 00:18:14.589 "flush": true, 00:18:14.589 "reset": true, 00:18:14.589 "nvme_admin": false, 00:18:14.589 "nvme_io": false, 00:18:14.589 "nvme_io_md": false, 00:18:14.589 "write_zeroes": true, 00:18:14.589 "zcopy": true, 00:18:14.589 "get_zone_info": false, 00:18:14.589 "zone_management": false, 00:18:14.589 "zone_append": false, 00:18:14.589 "compare": false, 00:18:14.589 "compare_and_write": false, 00:18:14.589 "abort": true, 00:18:14.589 "seek_hole": false, 00:18:14.589 "seek_data": false, 00:18:14.589 "copy": true, 00:18:14.589 "nvme_iov_md": false 00:18:14.589 }, 00:18:14.589 "memory_domains": [ 00:18:14.589 { 00:18:14.589 "dma_device_id": "system", 00:18:14.589 "dma_device_type": 1 00:18:14.589 }, 00:18:14.589 { 00:18:14.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.589 "dma_device_type": 2 00:18:14.589 } 00:18:14.589 ], 00:18:14.589 "driver_specific": {} 00:18:14.589 } 00:18:14.589 ] 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.589 "name": "Existed_Raid", 00:18:14.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.589 "strip_size_kb": 0, 00:18:14.589 "state": "configuring", 00:18:14.589 "raid_level": "raid1", 00:18:14.589 "superblock": false, 00:18:14.589 "num_base_bdevs": 4, 00:18:14.589 "num_base_bdevs_discovered": 2, 00:18:14.589 "num_base_bdevs_operational": 4, 00:18:14.589 "base_bdevs_list": [ 00:18:14.589 { 00:18:14.589 "name": "BaseBdev1", 00:18:14.589 "uuid": "c6ebd7f1-2b8b-41f2-9a6a-165bd8c08fa6", 00:18:14.589 "is_configured": true, 00:18:14.589 "data_offset": 0, 00:18:14.589 "data_size": 65536 00:18:14.589 }, 00:18:14.589 { 00:18:14.589 "name": "BaseBdev2", 00:18:14.589 "uuid": "9442fe23-26ca-4822-95ba-717447d58e39", 00:18:14.589 "is_configured": true, 
00:18:14.589 "data_offset": 0, 00:18:14.589 "data_size": 65536 00:18:14.589 }, 00:18:14.589 { 00:18:14.589 "name": "BaseBdev3", 00:18:14.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.589 "is_configured": false, 00:18:14.589 "data_offset": 0, 00:18:14.589 "data_size": 0 00:18:14.589 }, 00:18:14.589 { 00:18:14.589 "name": "BaseBdev4", 00:18:14.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.589 "is_configured": false, 00:18:14.589 "data_offset": 0, 00:18:14.589 "data_size": 0 00:18:14.589 } 00:18:14.589 ] 00:18:14.589 }' 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.589 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 [2024-12-06 13:12:21.646304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:15.156 BaseBdev3 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.156 [ 00:18:15.156 { 00:18:15.156 "name": "BaseBdev3", 00:18:15.156 "aliases": [ 00:18:15.156 "5438ac6b-584f-4fe9-b589-dd60167c1b23" 00:18:15.156 ], 00:18:15.156 "product_name": "Malloc disk", 00:18:15.156 "block_size": 512, 00:18:15.156 "num_blocks": 65536, 00:18:15.156 "uuid": "5438ac6b-584f-4fe9-b589-dd60167c1b23", 00:18:15.156 "assigned_rate_limits": { 00:18:15.156 "rw_ios_per_sec": 0, 00:18:15.156 "rw_mbytes_per_sec": 0, 00:18:15.156 "r_mbytes_per_sec": 0, 00:18:15.156 "w_mbytes_per_sec": 0 00:18:15.156 }, 00:18:15.156 "claimed": true, 00:18:15.156 "claim_type": "exclusive_write", 00:18:15.156 "zoned": false, 00:18:15.156 "supported_io_types": { 00:18:15.156 "read": true, 00:18:15.156 "write": true, 00:18:15.156 "unmap": true, 00:18:15.156 "flush": true, 00:18:15.156 "reset": true, 00:18:15.156 "nvme_admin": false, 00:18:15.156 "nvme_io": false, 00:18:15.156 "nvme_io_md": false, 00:18:15.156 "write_zeroes": true, 00:18:15.156 "zcopy": true, 00:18:15.156 "get_zone_info": false, 00:18:15.156 "zone_management": false, 00:18:15.156 "zone_append": false, 00:18:15.156 "compare": false, 00:18:15.156 "compare_and_write": false, 
00:18:15.156 "abort": true, 00:18:15.156 "seek_hole": false, 00:18:15.156 "seek_data": false, 00:18:15.156 "copy": true, 00:18:15.156 "nvme_iov_md": false 00:18:15.156 }, 00:18:15.156 "memory_domains": [ 00:18:15.156 { 00:18:15.156 "dma_device_id": "system", 00:18:15.156 "dma_device_type": 1 00:18:15.156 }, 00:18:15.156 { 00:18:15.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.156 "dma_device_type": 2 00:18:15.156 } 00:18:15.156 ], 00:18:15.156 "driver_specific": {} 00:18:15.156 } 00:18:15.156 ] 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:15.156 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.415 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.415 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.415 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.415 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.415 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.415 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.415 "name": "Existed_Raid", 00:18:15.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.415 "strip_size_kb": 0, 00:18:15.415 "state": "configuring", 00:18:15.415 "raid_level": "raid1", 00:18:15.415 "superblock": false, 00:18:15.415 "num_base_bdevs": 4, 00:18:15.415 "num_base_bdevs_discovered": 3, 00:18:15.415 "num_base_bdevs_operational": 4, 00:18:15.415 "base_bdevs_list": [ 00:18:15.415 { 00:18:15.415 "name": "BaseBdev1", 00:18:15.415 "uuid": "c6ebd7f1-2b8b-41f2-9a6a-165bd8c08fa6", 00:18:15.415 "is_configured": true, 00:18:15.415 "data_offset": 0, 00:18:15.415 "data_size": 65536 00:18:15.415 }, 00:18:15.415 { 00:18:15.415 "name": "BaseBdev2", 00:18:15.415 "uuid": "9442fe23-26ca-4822-95ba-717447d58e39", 00:18:15.415 "is_configured": true, 00:18:15.415 "data_offset": 0, 00:18:15.415 "data_size": 65536 00:18:15.415 }, 00:18:15.415 { 00:18:15.415 "name": "BaseBdev3", 00:18:15.415 "uuid": "5438ac6b-584f-4fe9-b589-dd60167c1b23", 00:18:15.415 "is_configured": true, 00:18:15.415 "data_offset": 0, 00:18:15.415 "data_size": 65536 00:18:15.415 }, 00:18:15.415 { 00:18:15.415 "name": "BaseBdev4", 00:18:15.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.415 "is_configured": false, 
00:18:15.415 "data_offset": 0, 00:18:15.415 "data_size": 0 00:18:15.415 } 00:18:15.415 ] 00:18:15.415 }' 00:18:15.415 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.415 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.981 [2024-12-06 13:12:22.278304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:15.981 [2024-12-06 13:12:22.278379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:15.981 [2024-12-06 13:12:22.278394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:15.981 [2024-12-06 13:12:22.278809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:15.981 [2024-12-06 13:12:22.279057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:15.981 [2024-12-06 13:12:22.279082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:15.981 [2024-12-06 13:12:22.279421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.981 BaseBdev4 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.981 [ 00:18:15.981 { 00:18:15.981 "name": "BaseBdev4", 00:18:15.981 "aliases": [ 00:18:15.981 "9ff1bc33-638b-4d7a-87ae-2372da582c05" 00:18:15.981 ], 00:18:15.981 "product_name": "Malloc disk", 00:18:15.981 "block_size": 512, 00:18:15.981 "num_blocks": 65536, 00:18:15.981 "uuid": "9ff1bc33-638b-4d7a-87ae-2372da582c05", 00:18:15.981 "assigned_rate_limits": { 00:18:15.981 "rw_ios_per_sec": 0, 00:18:15.981 "rw_mbytes_per_sec": 0, 00:18:15.981 "r_mbytes_per_sec": 0, 00:18:15.981 "w_mbytes_per_sec": 0 00:18:15.981 }, 00:18:15.981 "claimed": true, 00:18:15.981 "claim_type": "exclusive_write", 00:18:15.981 "zoned": false, 00:18:15.981 "supported_io_types": { 00:18:15.981 "read": true, 00:18:15.981 "write": true, 00:18:15.981 "unmap": true, 00:18:15.981 "flush": true, 00:18:15.981 "reset": true, 00:18:15.981 
"nvme_admin": false, 00:18:15.981 "nvme_io": false, 00:18:15.981 "nvme_io_md": false, 00:18:15.981 "write_zeroes": true, 00:18:15.981 "zcopy": true, 00:18:15.981 "get_zone_info": false, 00:18:15.981 "zone_management": false, 00:18:15.981 "zone_append": false, 00:18:15.981 "compare": false, 00:18:15.981 "compare_and_write": false, 00:18:15.981 "abort": true, 00:18:15.981 "seek_hole": false, 00:18:15.981 "seek_data": false, 00:18:15.981 "copy": true, 00:18:15.981 "nvme_iov_md": false 00:18:15.981 }, 00:18:15.981 "memory_domains": [ 00:18:15.981 { 00:18:15.981 "dma_device_id": "system", 00:18:15.981 "dma_device_type": 1 00:18:15.981 }, 00:18:15.981 { 00:18:15.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.981 "dma_device_type": 2 00:18:15.981 } 00:18:15.981 ], 00:18:15.981 "driver_specific": {} 00:18:15.981 } 00:18:15.981 ] 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.981 13:12:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.981 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.982 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.982 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.982 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.982 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.982 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.982 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.982 "name": "Existed_Raid", 00:18:15.982 "uuid": "ff689301-03c0-4b41-83ff-dd25f67f44cb", 00:18:15.982 "strip_size_kb": 0, 00:18:15.982 "state": "online", 00:18:15.982 "raid_level": "raid1", 00:18:15.982 "superblock": false, 00:18:15.982 "num_base_bdevs": 4, 00:18:15.982 "num_base_bdevs_discovered": 4, 00:18:15.982 "num_base_bdevs_operational": 4, 00:18:15.982 "base_bdevs_list": [ 00:18:15.982 { 00:18:15.982 "name": "BaseBdev1", 00:18:15.982 "uuid": "c6ebd7f1-2b8b-41f2-9a6a-165bd8c08fa6", 00:18:15.982 "is_configured": true, 00:18:15.982 "data_offset": 0, 00:18:15.982 "data_size": 65536 00:18:15.982 }, 00:18:15.982 { 00:18:15.982 "name": "BaseBdev2", 00:18:15.982 "uuid": "9442fe23-26ca-4822-95ba-717447d58e39", 00:18:15.982 "is_configured": true, 00:18:15.982 "data_offset": 0, 00:18:15.982 "data_size": 65536 00:18:15.982 }, 00:18:15.982 { 00:18:15.982 "name": "BaseBdev3", 00:18:15.982 "uuid": 
"5438ac6b-584f-4fe9-b589-dd60167c1b23", 00:18:15.982 "is_configured": true, 00:18:15.982 "data_offset": 0, 00:18:15.982 "data_size": 65536 00:18:15.982 }, 00:18:15.982 { 00:18:15.982 "name": "BaseBdev4", 00:18:15.982 "uuid": "9ff1bc33-638b-4d7a-87ae-2372da582c05", 00:18:15.982 "is_configured": true, 00:18:15.982 "data_offset": 0, 00:18:15.982 "data_size": 65536 00:18:15.982 } 00:18:15.982 ] 00:18:15.982 }' 00:18:15.982 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.982 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.547 [2024-12-06 13:12:22.818984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.547 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.547 13:12:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:16.547 "name": "Existed_Raid", 00:18:16.547 "aliases": [ 00:18:16.547 "ff689301-03c0-4b41-83ff-dd25f67f44cb" 00:18:16.547 ], 00:18:16.547 "product_name": "Raid Volume", 00:18:16.547 "block_size": 512, 00:18:16.547 "num_blocks": 65536, 00:18:16.547 "uuid": "ff689301-03c0-4b41-83ff-dd25f67f44cb", 00:18:16.547 "assigned_rate_limits": { 00:18:16.547 "rw_ios_per_sec": 0, 00:18:16.547 "rw_mbytes_per_sec": 0, 00:18:16.547 "r_mbytes_per_sec": 0, 00:18:16.548 "w_mbytes_per_sec": 0 00:18:16.548 }, 00:18:16.548 "claimed": false, 00:18:16.548 "zoned": false, 00:18:16.548 "supported_io_types": { 00:18:16.548 "read": true, 00:18:16.548 "write": true, 00:18:16.548 "unmap": false, 00:18:16.548 "flush": false, 00:18:16.548 "reset": true, 00:18:16.548 "nvme_admin": false, 00:18:16.548 "nvme_io": false, 00:18:16.548 "nvme_io_md": false, 00:18:16.548 "write_zeroes": true, 00:18:16.548 "zcopy": false, 00:18:16.548 "get_zone_info": false, 00:18:16.548 "zone_management": false, 00:18:16.548 "zone_append": false, 00:18:16.548 "compare": false, 00:18:16.548 "compare_and_write": false, 00:18:16.548 "abort": false, 00:18:16.548 "seek_hole": false, 00:18:16.548 "seek_data": false, 00:18:16.548 "copy": false, 00:18:16.548 "nvme_iov_md": false 00:18:16.548 }, 00:18:16.548 "memory_domains": [ 00:18:16.548 { 00:18:16.548 "dma_device_id": "system", 00:18:16.548 "dma_device_type": 1 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.548 "dma_device_type": 2 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "dma_device_id": "system", 00:18:16.548 "dma_device_type": 1 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.548 "dma_device_type": 2 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "dma_device_id": "system", 00:18:16.548 "dma_device_type": 1 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:16.548 "dma_device_type": 2 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "dma_device_id": "system", 00:18:16.548 "dma_device_type": 1 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.548 "dma_device_type": 2 00:18:16.548 } 00:18:16.548 ], 00:18:16.548 "driver_specific": { 00:18:16.548 "raid": { 00:18:16.548 "uuid": "ff689301-03c0-4b41-83ff-dd25f67f44cb", 00:18:16.548 "strip_size_kb": 0, 00:18:16.548 "state": "online", 00:18:16.548 "raid_level": "raid1", 00:18:16.548 "superblock": false, 00:18:16.548 "num_base_bdevs": 4, 00:18:16.548 "num_base_bdevs_discovered": 4, 00:18:16.548 "num_base_bdevs_operational": 4, 00:18:16.548 "base_bdevs_list": [ 00:18:16.548 { 00:18:16.548 "name": "BaseBdev1", 00:18:16.548 "uuid": "c6ebd7f1-2b8b-41f2-9a6a-165bd8c08fa6", 00:18:16.548 "is_configured": true, 00:18:16.548 "data_offset": 0, 00:18:16.548 "data_size": 65536 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "name": "BaseBdev2", 00:18:16.548 "uuid": "9442fe23-26ca-4822-95ba-717447d58e39", 00:18:16.548 "is_configured": true, 00:18:16.548 "data_offset": 0, 00:18:16.548 "data_size": 65536 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "name": "BaseBdev3", 00:18:16.548 "uuid": "5438ac6b-584f-4fe9-b589-dd60167c1b23", 00:18:16.548 "is_configured": true, 00:18:16.548 "data_offset": 0, 00:18:16.548 "data_size": 65536 00:18:16.548 }, 00:18:16.548 { 00:18:16.548 "name": "BaseBdev4", 00:18:16.548 "uuid": "9ff1bc33-638b-4d7a-87ae-2372da582c05", 00:18:16.548 "is_configured": true, 00:18:16.548 "data_offset": 0, 00:18:16.548 "data_size": 65536 00:18:16.548 } 00:18:16.548 ] 00:18:16.548 } 00:18:16.548 } 00:18:16.548 }' 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:16.548 BaseBdev2 00:18:16.548 BaseBdev3 
00:18:16.548 BaseBdev4' 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.548 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.548 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.807 13:12:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:16.807 13:12:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.807 [2024-12-06 13:12:23.218697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.807 
13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.807 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.066 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.066 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.066 "name": "Existed_Raid", 00:18:17.066 "uuid": "ff689301-03c0-4b41-83ff-dd25f67f44cb", 00:18:17.066 "strip_size_kb": 0, 00:18:17.066 "state": "online", 00:18:17.066 "raid_level": "raid1", 00:18:17.066 "superblock": false, 00:18:17.066 "num_base_bdevs": 4, 00:18:17.066 "num_base_bdevs_discovered": 3, 00:18:17.066 "num_base_bdevs_operational": 3, 00:18:17.066 "base_bdevs_list": [ 00:18:17.066 { 00:18:17.066 "name": null, 00:18:17.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.066 "is_configured": false, 00:18:17.066 "data_offset": 0, 00:18:17.066 "data_size": 65536 00:18:17.066 }, 00:18:17.066 { 00:18:17.066 "name": "BaseBdev2", 00:18:17.066 "uuid": "9442fe23-26ca-4822-95ba-717447d58e39", 00:18:17.066 "is_configured": true, 00:18:17.066 "data_offset": 0, 00:18:17.066 "data_size": 65536 00:18:17.066 }, 00:18:17.066 { 00:18:17.066 "name": "BaseBdev3", 00:18:17.066 "uuid": "5438ac6b-584f-4fe9-b589-dd60167c1b23", 00:18:17.066 "is_configured": true, 00:18:17.066 "data_offset": 0, 
00:18:17.066 "data_size": 65536 00:18:17.066 }, 00:18:17.066 { 00:18:17.066 "name": "BaseBdev4", 00:18:17.066 "uuid": "9ff1bc33-638b-4d7a-87ae-2372da582c05", 00:18:17.066 "is_configured": true, 00:18:17.066 "data_offset": 0, 00:18:17.066 "data_size": 65536 00:18:17.066 } 00:18:17.066 ] 00:18:17.066 }' 00:18:17.066 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.066 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.633 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.633 [2024-12-06 13:12:23.934274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.633 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.633 [2024-12-06 13:12:24.081862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.892 [2024-12-06 13:12:24.236601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:17.892 [2024-12-06 13:12:24.236951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.892 [2024-12-06 13:12:24.324769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.892 [2024-12-06 13:12:24.324837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.892 [2024-12-06 13:12:24.324858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.892 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.151 BaseBdev2 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.151 [ 00:18:18.151 { 00:18:18.151 "name": "BaseBdev2", 00:18:18.151 "aliases": [ 00:18:18.151 "4e284fe1-c976-40db-9851-b19fdb121d25" 00:18:18.151 ], 00:18:18.151 "product_name": "Malloc disk", 00:18:18.151 "block_size": 512, 00:18:18.151 "num_blocks": 65536, 00:18:18.151 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:18.151 "assigned_rate_limits": { 00:18:18.151 "rw_ios_per_sec": 0, 00:18:18.151 "rw_mbytes_per_sec": 0, 00:18:18.151 "r_mbytes_per_sec": 0, 00:18:18.151 "w_mbytes_per_sec": 0 00:18:18.151 }, 00:18:18.151 "claimed": false, 00:18:18.151 "zoned": false, 00:18:18.151 "supported_io_types": { 00:18:18.151 "read": true, 00:18:18.151 "write": true, 00:18:18.151 "unmap": true, 00:18:18.151 "flush": true, 00:18:18.151 "reset": true, 00:18:18.151 "nvme_admin": false, 00:18:18.151 "nvme_io": false, 00:18:18.151 "nvme_io_md": false, 00:18:18.151 "write_zeroes": true, 00:18:18.151 "zcopy": true, 00:18:18.151 "get_zone_info": false, 00:18:18.151 "zone_management": false, 00:18:18.151 "zone_append": false, 00:18:18.151 "compare": false, 
00:18:18.151 "compare_and_write": false, 00:18:18.151 "abort": true, 00:18:18.151 "seek_hole": false, 00:18:18.151 "seek_data": false, 00:18:18.151 "copy": true, 00:18:18.151 "nvme_iov_md": false 00:18:18.151 }, 00:18:18.151 "memory_domains": [ 00:18:18.151 { 00:18:18.151 "dma_device_id": "system", 00:18:18.151 "dma_device_type": 1 00:18:18.151 }, 00:18:18.151 { 00:18:18.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.151 "dma_device_type": 2 00:18:18.151 } 00:18:18.151 ], 00:18:18.151 "driver_specific": {} 00:18:18.151 } 00:18:18.151 ] 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.151 BaseBdev3 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:18.151 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 [ 00:18:18.152 { 00:18:18.152 "name": "BaseBdev3", 00:18:18.152 "aliases": [ 00:18:18.152 "b99a15d0-727b-4c4e-a522-b7b0e1f204ca" 00:18:18.152 ], 00:18:18.152 "product_name": "Malloc disk", 00:18:18.152 "block_size": 512, 00:18:18.152 "num_blocks": 65536, 00:18:18.152 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:18.152 "assigned_rate_limits": { 00:18:18.152 "rw_ios_per_sec": 0, 00:18:18.152 "rw_mbytes_per_sec": 0, 00:18:18.152 "r_mbytes_per_sec": 0, 00:18:18.152 "w_mbytes_per_sec": 0 00:18:18.152 }, 00:18:18.152 "claimed": false, 00:18:18.152 "zoned": false, 00:18:18.152 "supported_io_types": { 00:18:18.152 "read": true, 00:18:18.152 "write": true, 00:18:18.152 "unmap": true, 00:18:18.152 "flush": true, 00:18:18.152 "reset": true, 00:18:18.152 "nvme_admin": false, 00:18:18.152 "nvme_io": false, 00:18:18.152 "nvme_io_md": false, 00:18:18.152 "write_zeroes": true, 00:18:18.152 "zcopy": true, 00:18:18.152 "get_zone_info": false, 00:18:18.152 "zone_management": false, 00:18:18.152 "zone_append": false, 00:18:18.152 "compare": false, 00:18:18.152 
"compare_and_write": false, 00:18:18.152 "abort": true, 00:18:18.152 "seek_hole": false, 00:18:18.152 "seek_data": false, 00:18:18.152 "copy": true, 00:18:18.152 "nvme_iov_md": false 00:18:18.152 }, 00:18:18.152 "memory_domains": [ 00:18:18.152 { 00:18:18.152 "dma_device_id": "system", 00:18:18.152 "dma_device_type": 1 00:18:18.152 }, 00:18:18.152 { 00:18:18.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.152 "dma_device_type": 2 00:18:18.152 } 00:18:18.152 ], 00:18:18.152 "driver_specific": {} 00:18:18.152 } 00:18:18.152 ] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 BaseBdev4 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 [ 00:18:18.152 { 00:18:18.152 "name": "BaseBdev4", 00:18:18.152 "aliases": [ 00:18:18.152 "7dc3e782-07f4-43a7-9cc3-7423e89e7917" 00:18:18.152 ], 00:18:18.152 "product_name": "Malloc disk", 00:18:18.152 "block_size": 512, 00:18:18.152 "num_blocks": 65536, 00:18:18.152 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:18.152 "assigned_rate_limits": { 00:18:18.152 "rw_ios_per_sec": 0, 00:18:18.152 "rw_mbytes_per_sec": 0, 00:18:18.152 "r_mbytes_per_sec": 0, 00:18:18.152 "w_mbytes_per_sec": 0 00:18:18.152 }, 00:18:18.152 "claimed": false, 00:18:18.152 "zoned": false, 00:18:18.152 "supported_io_types": { 00:18:18.152 "read": true, 00:18:18.152 "write": true, 00:18:18.152 "unmap": true, 00:18:18.152 "flush": true, 00:18:18.152 "reset": true, 00:18:18.152 "nvme_admin": false, 00:18:18.152 "nvme_io": false, 00:18:18.152 "nvme_io_md": false, 00:18:18.152 "write_zeroes": true, 00:18:18.152 "zcopy": true, 00:18:18.152 "get_zone_info": false, 00:18:18.152 "zone_management": false, 00:18:18.152 "zone_append": false, 00:18:18.152 "compare": false, 00:18:18.152 
"compare_and_write": false, 00:18:18.152 "abort": true, 00:18:18.152 "seek_hole": false, 00:18:18.152 "seek_data": false, 00:18:18.152 "copy": true, 00:18:18.152 "nvme_iov_md": false 00:18:18.152 }, 00:18:18.152 "memory_domains": [ 00:18:18.152 { 00:18:18.152 "dma_device_id": "system", 00:18:18.152 "dma_device_type": 1 00:18:18.152 }, 00:18:18.152 { 00:18:18.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.152 "dma_device_type": 2 00:18:18.152 } 00:18:18.152 ], 00:18:18.152 "driver_specific": {} 00:18:18.152 } 00:18:18.152 ] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 [2024-12-06 13:12:24.616311] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.152 [2024-12-06 13:12:24.616591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.152 [2024-12-06 13:12:24.616734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.152 [2024-12-06 13:12:24.619393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:18.152 [2024-12-06 13:12:24.619630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.152 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.152 "name": "Existed_Raid", 00:18:18.152 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:18.152 "strip_size_kb": 0, 00:18:18.152 "state": "configuring", 00:18:18.152 "raid_level": "raid1", 00:18:18.152 "superblock": false, 00:18:18.152 "num_base_bdevs": 4, 00:18:18.152 "num_base_bdevs_discovered": 3, 00:18:18.152 "num_base_bdevs_operational": 4, 00:18:18.152 "base_bdevs_list": [ 00:18:18.152 { 00:18:18.152 "name": "BaseBdev1", 00:18:18.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.152 "is_configured": false, 00:18:18.152 "data_offset": 0, 00:18:18.152 "data_size": 0 00:18:18.152 }, 00:18:18.152 { 00:18:18.152 "name": "BaseBdev2", 00:18:18.152 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:18.152 "is_configured": true, 00:18:18.152 "data_offset": 0, 00:18:18.153 "data_size": 65536 00:18:18.153 }, 00:18:18.153 { 00:18:18.153 "name": "BaseBdev3", 00:18:18.153 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:18.153 "is_configured": true, 00:18:18.153 "data_offset": 0, 00:18:18.153 "data_size": 65536 00:18:18.153 }, 00:18:18.153 { 00:18:18.153 "name": "BaseBdev4", 00:18:18.153 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:18.153 "is_configured": true, 00:18:18.153 "data_offset": 0, 00:18:18.153 "data_size": 65536 00:18:18.153 } 00:18:18.153 ] 00:18:18.153 }' 00:18:18.153 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.153 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.719 [2024-12-06 13:12:25.108585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.719 "name": "Existed_Raid", 00:18:18.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.719 
"strip_size_kb": 0, 00:18:18.719 "state": "configuring", 00:18:18.719 "raid_level": "raid1", 00:18:18.719 "superblock": false, 00:18:18.719 "num_base_bdevs": 4, 00:18:18.719 "num_base_bdevs_discovered": 2, 00:18:18.719 "num_base_bdevs_operational": 4, 00:18:18.719 "base_bdevs_list": [ 00:18:18.719 { 00:18:18.719 "name": "BaseBdev1", 00:18:18.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.719 "is_configured": false, 00:18:18.719 "data_offset": 0, 00:18:18.719 "data_size": 0 00:18:18.719 }, 00:18:18.719 { 00:18:18.719 "name": null, 00:18:18.719 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:18.719 "is_configured": false, 00:18:18.719 "data_offset": 0, 00:18:18.719 "data_size": 65536 00:18:18.719 }, 00:18:18.719 { 00:18:18.719 "name": "BaseBdev3", 00:18:18.719 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:18.719 "is_configured": true, 00:18:18.719 "data_offset": 0, 00:18:18.719 "data_size": 65536 00:18:18.719 }, 00:18:18.719 { 00:18:18.719 "name": "BaseBdev4", 00:18:18.719 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:18.719 "is_configured": true, 00:18:18.719 "data_offset": 0, 00:18:18.719 "data_size": 65536 00:18:18.719 } 00:18:18.719 ] 00:18:18.719 }' 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.719 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.285 13:12:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.285 [2024-12-06 13:12:25.684081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.285 BaseBdev1 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.285 [ 00:18:19.285 { 00:18:19.285 "name": "BaseBdev1", 00:18:19.285 "aliases": [ 00:18:19.285 "a56713c0-94b4-462d-bdb1-07c78cbffcd3" 00:18:19.285 ], 00:18:19.285 "product_name": "Malloc disk", 00:18:19.285 "block_size": 512, 00:18:19.285 "num_blocks": 65536, 00:18:19.285 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:19.285 "assigned_rate_limits": { 00:18:19.285 "rw_ios_per_sec": 0, 00:18:19.285 "rw_mbytes_per_sec": 0, 00:18:19.285 "r_mbytes_per_sec": 0, 00:18:19.285 "w_mbytes_per_sec": 0 00:18:19.285 }, 00:18:19.285 "claimed": true, 00:18:19.285 "claim_type": "exclusive_write", 00:18:19.285 "zoned": false, 00:18:19.285 "supported_io_types": { 00:18:19.285 "read": true, 00:18:19.285 "write": true, 00:18:19.285 "unmap": true, 00:18:19.285 "flush": true, 00:18:19.285 "reset": true, 00:18:19.285 "nvme_admin": false, 00:18:19.285 "nvme_io": false, 00:18:19.285 "nvme_io_md": false, 00:18:19.285 "write_zeroes": true, 00:18:19.285 "zcopy": true, 00:18:19.285 "get_zone_info": false, 00:18:19.285 "zone_management": false, 00:18:19.285 "zone_append": false, 00:18:19.285 "compare": false, 00:18:19.285 "compare_and_write": false, 00:18:19.285 "abort": true, 00:18:19.285 "seek_hole": false, 00:18:19.285 "seek_data": false, 00:18:19.285 "copy": true, 00:18:19.285 "nvme_iov_md": false 00:18:19.285 }, 00:18:19.285 "memory_domains": [ 00:18:19.285 { 00:18:19.285 "dma_device_id": "system", 00:18:19.285 "dma_device_type": 1 00:18:19.285 }, 00:18:19.285 { 00:18:19.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.285 "dma_device_type": 2 00:18:19.285 } 00:18:19.285 ], 00:18:19.285 "driver_specific": {} 00:18:19.285 } 00:18:19.285 ] 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:19.285 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.286 "name": "Existed_Raid", 00:18:19.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.286 
"strip_size_kb": 0, 00:18:19.286 "state": "configuring", 00:18:19.286 "raid_level": "raid1", 00:18:19.286 "superblock": false, 00:18:19.286 "num_base_bdevs": 4, 00:18:19.286 "num_base_bdevs_discovered": 3, 00:18:19.286 "num_base_bdevs_operational": 4, 00:18:19.286 "base_bdevs_list": [ 00:18:19.286 { 00:18:19.286 "name": "BaseBdev1", 00:18:19.286 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:19.286 "is_configured": true, 00:18:19.286 "data_offset": 0, 00:18:19.286 "data_size": 65536 00:18:19.286 }, 00:18:19.286 { 00:18:19.286 "name": null, 00:18:19.286 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:19.286 "is_configured": false, 00:18:19.286 "data_offset": 0, 00:18:19.286 "data_size": 65536 00:18:19.286 }, 00:18:19.286 { 00:18:19.286 "name": "BaseBdev3", 00:18:19.286 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:19.286 "is_configured": true, 00:18:19.286 "data_offset": 0, 00:18:19.286 "data_size": 65536 00:18:19.286 }, 00:18:19.286 { 00:18:19.286 "name": "BaseBdev4", 00:18:19.286 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:19.286 "is_configured": true, 00:18:19.286 "data_offset": 0, 00:18:19.286 "data_size": 65536 00:18:19.286 } 00:18:19.286 ] 00:18:19.286 }' 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.286 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.850 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:19.850 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.851 
13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.851 [2024-12-06 13:12:26.296427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.851 "name": "Existed_Raid", 00:18:19.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.851 "strip_size_kb": 0, 00:18:19.851 "state": "configuring", 00:18:19.851 "raid_level": "raid1", 00:18:19.851 "superblock": false, 00:18:19.851 "num_base_bdevs": 4, 00:18:19.851 "num_base_bdevs_discovered": 2, 00:18:19.851 "num_base_bdevs_operational": 4, 00:18:19.851 "base_bdevs_list": [ 00:18:19.851 { 00:18:19.851 "name": "BaseBdev1", 00:18:19.851 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:19.851 "is_configured": true, 00:18:19.851 "data_offset": 0, 00:18:19.851 "data_size": 65536 00:18:19.851 }, 00:18:19.851 { 00:18:19.851 "name": null, 00:18:19.851 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:19.851 "is_configured": false, 00:18:19.851 "data_offset": 0, 00:18:19.851 "data_size": 65536 00:18:19.851 }, 00:18:19.851 { 00:18:19.851 "name": null, 00:18:19.851 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:19.851 "is_configured": false, 00:18:19.851 "data_offset": 0, 00:18:19.851 "data_size": 65536 00:18:19.851 }, 00:18:19.851 { 00:18:19.851 "name": "BaseBdev4", 00:18:19.851 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:19.851 "is_configured": true, 00:18:19.851 "data_offset": 0, 00:18:19.851 "data_size": 65536 00:18:19.851 } 00:18:19.851 ] 00:18:19.851 }' 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.851 13:12:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.519 [2024-12-06 13:12:26.900671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.519 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.519 "name": "Existed_Raid", 00:18:20.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.519 "strip_size_kb": 0, 00:18:20.519 "state": "configuring", 00:18:20.519 "raid_level": "raid1", 00:18:20.519 "superblock": false, 00:18:20.519 "num_base_bdevs": 4, 00:18:20.519 "num_base_bdevs_discovered": 3, 00:18:20.519 "num_base_bdevs_operational": 4, 00:18:20.520 "base_bdevs_list": [ 00:18:20.520 { 00:18:20.520 "name": "BaseBdev1", 00:18:20.520 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:20.520 "is_configured": true, 00:18:20.520 "data_offset": 0, 00:18:20.520 "data_size": 65536 00:18:20.520 }, 00:18:20.520 { 00:18:20.520 "name": null, 00:18:20.520 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:20.520 "is_configured": false, 00:18:20.520 "data_offset": 0, 00:18:20.520 "data_size": 65536 00:18:20.520 }, 00:18:20.520 { 
00:18:20.520 "name": "BaseBdev3", 00:18:20.520 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:20.520 "is_configured": true, 00:18:20.520 "data_offset": 0, 00:18:20.520 "data_size": 65536 00:18:20.520 }, 00:18:20.520 { 00:18:20.520 "name": "BaseBdev4", 00:18:20.520 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:20.520 "is_configured": true, 00:18:20.520 "data_offset": 0, 00:18:20.520 "data_size": 65536 00:18:20.520 } 00:18:20.520 ] 00:18:20.520 }' 00:18:20.520 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.520 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.085 [2024-12-06 13:12:27.480952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.085 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.086 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.086 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.086 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.343 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.343 "name": "Existed_Raid", 00:18:21.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.343 "strip_size_kb": 0, 00:18:21.343 "state": "configuring", 00:18:21.343 "raid_level": "raid1", 00:18:21.343 "superblock": false, 00:18:21.343 
"num_base_bdevs": 4, 00:18:21.343 "num_base_bdevs_discovered": 2, 00:18:21.343 "num_base_bdevs_operational": 4, 00:18:21.343 "base_bdevs_list": [ 00:18:21.343 { 00:18:21.343 "name": null, 00:18:21.343 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:21.343 "is_configured": false, 00:18:21.343 "data_offset": 0, 00:18:21.343 "data_size": 65536 00:18:21.343 }, 00:18:21.343 { 00:18:21.343 "name": null, 00:18:21.343 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:21.343 "is_configured": false, 00:18:21.343 "data_offset": 0, 00:18:21.343 "data_size": 65536 00:18:21.343 }, 00:18:21.343 { 00:18:21.343 "name": "BaseBdev3", 00:18:21.343 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:21.343 "is_configured": true, 00:18:21.343 "data_offset": 0, 00:18:21.343 "data_size": 65536 00:18:21.343 }, 00:18:21.343 { 00:18:21.343 "name": "BaseBdev4", 00:18:21.343 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:21.343 "is_configured": true, 00:18:21.343 "data_offset": 0, 00:18:21.343 "data_size": 65536 00:18:21.343 } 00:18:21.343 ] 00:18:21.343 }' 00:18:21.343 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.343 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.600 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.600 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:21.600 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.601 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.601 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.601 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:21.601 13:12:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:21.601 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.601 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.858 [2024-12-06 13:12:28.127595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.858 13:12:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.858 "name": "Existed_Raid", 00:18:21.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.858 "strip_size_kb": 0, 00:18:21.858 "state": "configuring", 00:18:21.858 "raid_level": "raid1", 00:18:21.858 "superblock": false, 00:18:21.858 "num_base_bdevs": 4, 00:18:21.858 "num_base_bdevs_discovered": 3, 00:18:21.858 "num_base_bdevs_operational": 4, 00:18:21.858 "base_bdevs_list": [ 00:18:21.858 { 00:18:21.858 "name": null, 00:18:21.858 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:21.858 "is_configured": false, 00:18:21.858 "data_offset": 0, 00:18:21.858 "data_size": 65536 00:18:21.858 }, 00:18:21.858 { 00:18:21.858 "name": "BaseBdev2", 00:18:21.858 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:21.858 "is_configured": true, 00:18:21.858 "data_offset": 0, 00:18:21.858 "data_size": 65536 00:18:21.858 }, 00:18:21.858 { 00:18:21.858 "name": "BaseBdev3", 00:18:21.858 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:21.858 "is_configured": true, 00:18:21.858 "data_offset": 0, 00:18:21.858 "data_size": 65536 00:18:21.858 }, 00:18:21.858 { 00:18:21.858 "name": "BaseBdev4", 00:18:21.858 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:21.858 "is_configured": true, 00:18:21.858 "data_offset": 0, 00:18:21.858 "data_size": 65536 00:18:21.858 } 00:18:21.858 ] 00:18:21.858 }' 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.858 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.115 13:12:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.115 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:22.115 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.115 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a56713c0-94b4-462d-bdb1-07c78cbffcd3 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.374 [2024-12-06 13:12:28.773789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:22.374 [2024-12-06 13:12:28.773894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:22.374 [2024-12-06 13:12:28.773912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:22.374 [2024-12-06 13:12:28.774278] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:22.374 [2024-12-06 13:12:28.774576] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:22.374 [2024-12-06 13:12:28.774593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:22.374 [2024-12-06 13:12:28.775007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.374 NewBaseBdev 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.374 13:12:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.374 [ 00:18:22.374 { 00:18:22.374 "name": "NewBaseBdev", 00:18:22.374 "aliases": [ 00:18:22.374 "a56713c0-94b4-462d-bdb1-07c78cbffcd3" 00:18:22.374 ], 00:18:22.374 "product_name": "Malloc disk", 00:18:22.374 "block_size": 512, 00:18:22.374 "num_blocks": 65536, 00:18:22.374 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:22.374 "assigned_rate_limits": { 00:18:22.374 "rw_ios_per_sec": 0, 00:18:22.374 "rw_mbytes_per_sec": 0, 00:18:22.374 "r_mbytes_per_sec": 0, 00:18:22.374 "w_mbytes_per_sec": 0 00:18:22.374 }, 00:18:22.374 "claimed": true, 00:18:22.374 "claim_type": "exclusive_write", 00:18:22.374 "zoned": false, 00:18:22.374 "supported_io_types": { 00:18:22.374 "read": true, 00:18:22.374 "write": true, 00:18:22.374 "unmap": true, 00:18:22.374 "flush": true, 00:18:22.374 "reset": true, 00:18:22.374 "nvme_admin": false, 00:18:22.374 "nvme_io": false, 00:18:22.374 "nvme_io_md": false, 00:18:22.374 "write_zeroes": true, 00:18:22.374 "zcopy": true, 00:18:22.374 "get_zone_info": false, 00:18:22.374 "zone_management": false, 00:18:22.374 "zone_append": false, 00:18:22.374 "compare": false, 00:18:22.374 "compare_and_write": false, 00:18:22.374 "abort": true, 00:18:22.374 "seek_hole": false, 00:18:22.374 "seek_data": false, 00:18:22.374 "copy": true, 00:18:22.374 "nvme_iov_md": false 00:18:22.374 }, 00:18:22.374 "memory_domains": [ 00:18:22.374 { 00:18:22.374 "dma_device_id": "system", 00:18:22.374 "dma_device_type": 1 00:18:22.374 }, 00:18:22.374 { 00:18:22.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.374 "dma_device_type": 2 00:18:22.374 } 00:18:22.374 ], 00:18:22.374 "driver_specific": {} 00:18:22.374 } 00:18:22.374 ] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:22.374 13:12:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.374 "name": "Existed_Raid", 00:18:22.374 "uuid": "0ec4782d-8544-4d92-ad7f-7a950eb78ef5", 00:18:22.374 "strip_size_kb": 0, 00:18:22.374 "state": "online", 00:18:22.374 "raid_level": "raid1", 
00:18:22.374 "superblock": false, 00:18:22.374 "num_base_bdevs": 4, 00:18:22.374 "num_base_bdevs_discovered": 4, 00:18:22.374 "num_base_bdevs_operational": 4, 00:18:22.374 "base_bdevs_list": [ 00:18:22.374 { 00:18:22.374 "name": "NewBaseBdev", 00:18:22.374 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:22.374 "is_configured": true, 00:18:22.374 "data_offset": 0, 00:18:22.374 "data_size": 65536 00:18:22.374 }, 00:18:22.374 { 00:18:22.374 "name": "BaseBdev2", 00:18:22.374 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:22.374 "is_configured": true, 00:18:22.374 "data_offset": 0, 00:18:22.374 "data_size": 65536 00:18:22.374 }, 00:18:22.374 { 00:18:22.374 "name": "BaseBdev3", 00:18:22.374 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:22.374 "is_configured": true, 00:18:22.374 "data_offset": 0, 00:18:22.374 "data_size": 65536 00:18:22.374 }, 00:18:22.374 { 00:18:22.374 "name": "BaseBdev4", 00:18:22.374 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:22.374 "is_configured": true, 00:18:22.374 "data_offset": 0, 00:18:22.374 "data_size": 65536 00:18:22.374 } 00:18:22.374 ] 00:18:22.374 }' 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.374 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.940 [2024-12-06 13:12:29.318582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.940 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.940 "name": "Existed_Raid", 00:18:22.940 "aliases": [ 00:18:22.940 "0ec4782d-8544-4d92-ad7f-7a950eb78ef5" 00:18:22.940 ], 00:18:22.940 "product_name": "Raid Volume", 00:18:22.940 "block_size": 512, 00:18:22.940 "num_blocks": 65536, 00:18:22.940 "uuid": "0ec4782d-8544-4d92-ad7f-7a950eb78ef5", 00:18:22.940 "assigned_rate_limits": { 00:18:22.940 "rw_ios_per_sec": 0, 00:18:22.940 "rw_mbytes_per_sec": 0, 00:18:22.940 "r_mbytes_per_sec": 0, 00:18:22.940 "w_mbytes_per_sec": 0 00:18:22.940 }, 00:18:22.940 "claimed": false, 00:18:22.940 "zoned": false, 00:18:22.940 "supported_io_types": { 00:18:22.940 "read": true, 00:18:22.940 "write": true, 00:18:22.940 "unmap": false, 00:18:22.940 "flush": false, 00:18:22.940 "reset": true, 00:18:22.940 "nvme_admin": false, 00:18:22.940 "nvme_io": false, 00:18:22.940 "nvme_io_md": false, 00:18:22.940 "write_zeroes": true, 00:18:22.940 "zcopy": false, 00:18:22.940 "get_zone_info": false, 00:18:22.940 "zone_management": false, 00:18:22.940 "zone_append": false, 00:18:22.940 "compare": false, 00:18:22.940 "compare_and_write": false, 00:18:22.940 "abort": false, 00:18:22.940 "seek_hole": false, 00:18:22.940 "seek_data": false, 00:18:22.940 "copy": false, 00:18:22.940 
"nvme_iov_md": false 00:18:22.940 }, 00:18:22.940 "memory_domains": [ 00:18:22.940 { 00:18:22.940 "dma_device_id": "system", 00:18:22.940 "dma_device_type": 1 00:18:22.940 }, 00:18:22.940 { 00:18:22.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.940 "dma_device_type": 2 00:18:22.940 }, 00:18:22.940 { 00:18:22.940 "dma_device_id": "system", 00:18:22.940 "dma_device_type": 1 00:18:22.940 }, 00:18:22.940 { 00:18:22.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.941 "dma_device_type": 2 00:18:22.941 }, 00:18:22.941 { 00:18:22.941 "dma_device_id": "system", 00:18:22.941 "dma_device_type": 1 00:18:22.941 }, 00:18:22.941 { 00:18:22.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.941 "dma_device_type": 2 00:18:22.941 }, 00:18:22.941 { 00:18:22.941 "dma_device_id": "system", 00:18:22.941 "dma_device_type": 1 00:18:22.941 }, 00:18:22.941 { 00:18:22.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.941 "dma_device_type": 2 00:18:22.941 } 00:18:22.941 ], 00:18:22.941 "driver_specific": { 00:18:22.941 "raid": { 00:18:22.941 "uuid": "0ec4782d-8544-4d92-ad7f-7a950eb78ef5", 00:18:22.941 "strip_size_kb": 0, 00:18:22.941 "state": "online", 00:18:22.941 "raid_level": "raid1", 00:18:22.941 "superblock": false, 00:18:22.941 "num_base_bdevs": 4, 00:18:22.941 "num_base_bdevs_discovered": 4, 00:18:22.941 "num_base_bdevs_operational": 4, 00:18:22.941 "base_bdevs_list": [ 00:18:22.941 { 00:18:22.941 "name": "NewBaseBdev", 00:18:22.941 "uuid": "a56713c0-94b4-462d-bdb1-07c78cbffcd3", 00:18:22.941 "is_configured": true, 00:18:22.941 "data_offset": 0, 00:18:22.941 "data_size": 65536 00:18:22.941 }, 00:18:22.941 { 00:18:22.941 "name": "BaseBdev2", 00:18:22.941 "uuid": "4e284fe1-c976-40db-9851-b19fdb121d25", 00:18:22.941 "is_configured": true, 00:18:22.941 "data_offset": 0, 00:18:22.941 "data_size": 65536 00:18:22.941 }, 00:18:22.941 { 00:18:22.941 "name": "BaseBdev3", 00:18:22.941 "uuid": "b99a15d0-727b-4c4e-a522-b7b0e1f204ca", 00:18:22.941 "is_configured": true, 
00:18:22.941 "data_offset": 0, 00:18:22.941 "data_size": 65536 00:18:22.941 }, 00:18:22.941 { 00:18:22.941 "name": "BaseBdev4", 00:18:22.941 "uuid": "7dc3e782-07f4-43a7-9cc3-7423e89e7917", 00:18:22.941 "is_configured": true, 00:18:22.941 "data_offset": 0, 00:18:22.941 "data_size": 65536 00:18:22.941 } 00:18:22.941 ] 00:18:22.941 } 00:18:22.941 } 00:18:22.941 }' 00:18:22.941 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.941 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:22.941 BaseBdev2 00:18:22.941 BaseBdev3 00:18:22.941 BaseBdev4' 00:18:22.941 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.199 13:12:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.200 [2024-12-06 13:12:29.694194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.200 [2024-12-06 13:12:29.694382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.200 [2024-12-06 13:12:29.694661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.200 [2024-12-06 13:12:29.695113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.200 [2024-12-06 13:12:29.695142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73603 
00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73603 ']' 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73603 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.200 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73603 00:18:23.458 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.458 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.458 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73603' 00:18:23.458 killing process with pid 73603 00:18:23.458 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73603 00:18:23.458 [2024-12-06 13:12:29.734748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.458 13:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73603 00:18:23.716 [2024-12-06 13:12:30.131777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:25.150 00:18:25.150 real 0m13.190s 00:18:25.150 user 0m21.580s 00:18:25.150 sys 0m2.012s 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.150 ************************************ 00:18:25.150 END TEST raid_state_function_test 00:18:25.150 ************************************ 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.150 13:12:31 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:25.150 13:12:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:25.150 13:12:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.150 13:12:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.150 ************************************ 00:18:25.150 START TEST raid_state_function_test_sb 00:18:25.150 ************************************ 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.150 13:12:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:25.150 Process raid pid: 74292 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74292 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74292' 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74292 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74292 ']' 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.150 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.150 [2024-12-06 13:12:31.499874] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:18:25.151 [2024-12-06 13:12:31.500388] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.409 [2024-12-06 13:12:31.678030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.409 [2024-12-06 13:12:31.826010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.668 [2024-12-06 13:12:32.058298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.668 [2024-12-06 13:12:32.059091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.235 [2024-12-06 13:12:32.519125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.235 [2024-12-06 13:12:32.519602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.235 [2024-12-06 13:12:32.519641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.235 [2024-12-06 13:12:32.519671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.235 [2024-12-06 13:12:32.519689] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:26.235 [2024-12-06 13:12:32.519714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.235 [2024-12-06 13:12:32.519730] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:26.235 [2024-12-06 13:12:32.519754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.235 13:12:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.235 "name": "Existed_Raid", 00:18:26.235 "uuid": "3eb11f2c-5082-4248-a244-79c2af95dfe2", 00:18:26.235 "strip_size_kb": 0, 00:18:26.235 "state": "configuring", 00:18:26.235 "raid_level": "raid1", 00:18:26.235 "superblock": true, 00:18:26.235 "num_base_bdevs": 4, 00:18:26.235 "num_base_bdevs_discovered": 0, 00:18:26.235 "num_base_bdevs_operational": 4, 00:18:26.235 "base_bdevs_list": [ 00:18:26.235 { 00:18:26.235 "name": "BaseBdev1", 00:18:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.235 "is_configured": false, 00:18:26.235 "data_offset": 0, 00:18:26.235 "data_size": 0 00:18:26.235 }, 00:18:26.235 { 00:18:26.235 "name": "BaseBdev2", 00:18:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.235 "is_configured": false, 00:18:26.235 "data_offset": 0, 00:18:26.235 "data_size": 0 00:18:26.235 }, 00:18:26.235 { 00:18:26.235 "name": "BaseBdev3", 00:18:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.235 "is_configured": false, 00:18:26.235 "data_offset": 0, 00:18:26.235 "data_size": 0 00:18:26.235 }, 00:18:26.235 { 00:18:26.235 "name": "BaseBdev4", 00:18:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.235 "is_configured": false, 00:18:26.235 "data_offset": 0, 00:18:26.235 "data_size": 0 00:18:26.235 } 00:18:26.235 ] 00:18:26.235 }' 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.235 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.832 13:12:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.832 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.832 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.832 [2024-12-06 13:12:33.063088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.832 [2024-12-06 13:12:33.063141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:26.832 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.832 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:26.832 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.832 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.832 [2024-12-06 13:12:33.075125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.832 [2024-12-06 13:12:33.075429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.832 [2024-12-06 13:12:33.075474] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.832 [2024-12-06 13:12:33.075497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.833 [2024-12-06 13:12:33.075508] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:26.833 [2024-12-06 13:12:33.075523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.833 [2024-12-06 13:12:33.075534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
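The verify_raid_bdev_state helper seen in this trace pulls one raid bdev's info out of the full `bdev_raid_get_bdevs all` listing with a jq select. A minimal sketch of the same filter, using a static stand-in for the RPC output rather than a live socket:

```shell
# Select the entry named Existed_Raid from a list of raid bdevs and
# extract its state, mirroring the test's
# jq -r '.[] | select(.name == "Existed_Raid")' filter.
json='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":0},
       {"name":"Other_Raid","state":"online","num_base_bdevs_discovered":4}]'
state=$(echo "$json" | jq -r '.[] | select(.name == "Existed_Raid") | .state')
echo "$state"
```

The test then compares fields of the selected object (state, raid_level, num_base_bdevs_discovered, and so on) against the expected values passed to verify_raid_bdev_state.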
00:18:26.833 [2024-12-06 13:12:33.075549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.833 [2024-12-06 13:12:33.125796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.833 BaseBdev1 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
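The waitforbdev call in this trace retries `rpc_cmd bdev_get_bdevs -b BaseBdev1` with a 2000 ms default timeout until the bdev appears. A generic retry helper in the same spirit can be sketched with the probe command as a parameter so the example is self-contained (the function name and polling interval are illustrative, not the autotest implementation):

```shell
# Poll a probe command until it succeeds or a millisecond budget runs
# out; returns 0 on success, 1 on timeout. In the test the probe is an
# RPC lookup of the new bdev.
wait_for() {
    local timeout_ms=$1; shift
    local elapsed=0
    until "$@"; do
        sleep 0.1
        elapsed=$((elapsed + 100))
        [ "$elapsed" -ge "$timeout_ms" ] && return 1
    done
    return 0
}
```

Usage would look like `wait_for 2000 rpc_cmd bdev_get_bdevs -b BaseBdev1`, failing the test if the bdev never registers.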
00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.833 [ 00:18:26.833 { 00:18:26.833 "name": "BaseBdev1", 00:18:26.833 "aliases": [ 00:18:26.833 "e16b9a15-e7e8-47b1-9362-f95129fe3a0b" 00:18:26.833 ], 00:18:26.833 "product_name": "Malloc disk", 00:18:26.833 "block_size": 512, 00:18:26.833 "num_blocks": 65536, 00:18:26.833 "uuid": "e16b9a15-e7e8-47b1-9362-f95129fe3a0b", 00:18:26.833 "assigned_rate_limits": { 00:18:26.833 "rw_ios_per_sec": 0, 00:18:26.833 "rw_mbytes_per_sec": 0, 00:18:26.833 "r_mbytes_per_sec": 0, 00:18:26.833 "w_mbytes_per_sec": 0 00:18:26.833 }, 00:18:26.833 "claimed": true, 00:18:26.833 "claim_type": "exclusive_write", 00:18:26.833 "zoned": false, 00:18:26.833 "supported_io_types": { 00:18:26.833 "read": true, 00:18:26.833 "write": true, 00:18:26.833 "unmap": true, 00:18:26.833 "flush": true, 00:18:26.833 "reset": true, 00:18:26.833 "nvme_admin": false, 00:18:26.833 "nvme_io": false, 00:18:26.833 "nvme_io_md": false, 00:18:26.833 "write_zeroes": true, 00:18:26.833 "zcopy": true, 00:18:26.833 "get_zone_info": false, 00:18:26.833 "zone_management": false, 00:18:26.833 "zone_append": false, 00:18:26.833 "compare": false, 00:18:26.833 "compare_and_write": false, 00:18:26.833 "abort": true, 00:18:26.833 "seek_hole": false, 00:18:26.833 "seek_data": false, 00:18:26.833 "copy": true, 00:18:26.833 "nvme_iov_md": false 00:18:26.833 }, 00:18:26.833 "memory_domains": [ 00:18:26.833 { 00:18:26.833 "dma_device_id": "system", 00:18:26.833 "dma_device_type": 1 00:18:26.833 }, 00:18:26.833 { 00:18:26.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.833 "dma_device_type": 2 00:18:26.833 } 00:18:26.833 ], 00:18:26.833 "driver_specific": {} 
00:18:26.833 } 00:18:26.833 ] 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.833 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.833 "name": "Existed_Raid", 00:18:26.833 "uuid": "18ec9fe0-6d7c-4cfe-be7f-8c6a043d3cd5", 00:18:26.833 "strip_size_kb": 0, 00:18:26.833 "state": "configuring", 00:18:26.833 "raid_level": "raid1", 00:18:26.833 "superblock": true, 00:18:26.833 "num_base_bdevs": 4, 00:18:26.833 "num_base_bdevs_discovered": 1, 00:18:26.833 "num_base_bdevs_operational": 4, 00:18:26.833 "base_bdevs_list": [ 00:18:26.834 { 00:18:26.834 "name": "BaseBdev1", 00:18:26.834 "uuid": "e16b9a15-e7e8-47b1-9362-f95129fe3a0b", 00:18:26.834 "is_configured": true, 00:18:26.834 "data_offset": 2048, 00:18:26.834 "data_size": 63488 00:18:26.834 }, 00:18:26.834 { 00:18:26.834 "name": "BaseBdev2", 00:18:26.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.834 "is_configured": false, 00:18:26.834 "data_offset": 0, 00:18:26.834 "data_size": 0 00:18:26.834 }, 00:18:26.834 { 00:18:26.834 "name": "BaseBdev3", 00:18:26.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.834 "is_configured": false, 00:18:26.834 "data_offset": 0, 00:18:26.834 "data_size": 0 00:18:26.834 }, 00:18:26.834 { 00:18:26.834 "name": "BaseBdev4", 00:18:26.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.834 "is_configured": false, 00:18:26.834 "data_offset": 0, 00:18:26.834 "data_size": 0 00:18:26.834 } 00:18:26.834 ] 00:18:26.834 }' 00:18:26.834 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.834 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.401 [2024-12-06 13:12:33.686155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.401 [2024-12-06 13:12:33.686498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.401 [2024-12-06 13:12:33.694178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.401 [2024-12-06 13:12:33.697161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.401 [2024-12-06 13:12:33.697367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.401 [2024-12-06 13:12:33.697396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.401 [2024-12-06 13:12:33.697418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.401 [2024-12-06 13:12:33.697429] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:27.401 [2024-12-06 13:12:33.697464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:27.401 13:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.401 "name": 
"Existed_Raid", 00:18:27.401 "uuid": "3a8c0b7e-a772-4534-a98f-34022241fb0c", 00:18:27.401 "strip_size_kb": 0, 00:18:27.401 "state": "configuring", 00:18:27.401 "raid_level": "raid1", 00:18:27.401 "superblock": true, 00:18:27.401 "num_base_bdevs": 4, 00:18:27.401 "num_base_bdevs_discovered": 1, 00:18:27.401 "num_base_bdevs_operational": 4, 00:18:27.401 "base_bdevs_list": [ 00:18:27.401 { 00:18:27.401 "name": "BaseBdev1", 00:18:27.401 "uuid": "e16b9a15-e7e8-47b1-9362-f95129fe3a0b", 00:18:27.401 "is_configured": true, 00:18:27.401 "data_offset": 2048, 00:18:27.401 "data_size": 63488 00:18:27.401 }, 00:18:27.401 { 00:18:27.401 "name": "BaseBdev2", 00:18:27.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.401 "is_configured": false, 00:18:27.401 "data_offset": 0, 00:18:27.401 "data_size": 0 00:18:27.401 }, 00:18:27.401 { 00:18:27.401 "name": "BaseBdev3", 00:18:27.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.401 "is_configured": false, 00:18:27.401 "data_offset": 0, 00:18:27.401 "data_size": 0 00:18:27.401 }, 00:18:27.401 { 00:18:27.401 "name": "BaseBdev4", 00:18:27.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.401 "is_configured": false, 00:18:27.401 "data_offset": 0, 00:18:27.401 "data_size": 0 00:18:27.401 } 00:18:27.401 ] 00:18:27.401 }' 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.401 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.043 BaseBdev2 00:18:28.043 [2024-12-06 13:12:34.260625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.043 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.044 [ 00:18:28.044 { 00:18:28.044 "name": "BaseBdev2", 00:18:28.044 "aliases": [ 00:18:28.044 "eff770c7-0629-4677-8913-f62b1aa10342" 00:18:28.044 ], 00:18:28.044 "product_name": "Malloc disk", 00:18:28.044 "block_size": 512, 00:18:28.044 "num_blocks": 65536, 00:18:28.044 "uuid": "eff770c7-0629-4677-8913-f62b1aa10342", 00:18:28.044 "assigned_rate_limits": { 
00:18:28.044 "rw_ios_per_sec": 0, 00:18:28.044 "rw_mbytes_per_sec": 0, 00:18:28.044 "r_mbytes_per_sec": 0, 00:18:28.044 "w_mbytes_per_sec": 0 00:18:28.044 }, 00:18:28.044 "claimed": true, 00:18:28.044 "claim_type": "exclusive_write", 00:18:28.044 "zoned": false, 00:18:28.044 "supported_io_types": { 00:18:28.044 "read": true, 00:18:28.044 "write": true, 00:18:28.044 "unmap": true, 00:18:28.044 "flush": true, 00:18:28.044 "reset": true, 00:18:28.044 "nvme_admin": false, 00:18:28.044 "nvme_io": false, 00:18:28.044 "nvme_io_md": false, 00:18:28.044 "write_zeroes": true, 00:18:28.044 "zcopy": true, 00:18:28.044 "get_zone_info": false, 00:18:28.044 "zone_management": false, 00:18:28.044 "zone_append": false, 00:18:28.044 "compare": false, 00:18:28.044 "compare_and_write": false, 00:18:28.044 "abort": true, 00:18:28.044 "seek_hole": false, 00:18:28.044 "seek_data": false, 00:18:28.044 "copy": true, 00:18:28.044 "nvme_iov_md": false 00:18:28.044 }, 00:18:28.044 "memory_domains": [ 00:18:28.044 { 00:18:28.044 "dma_device_id": "system", 00:18:28.044 "dma_device_type": 1 00:18:28.044 }, 00:18:28.044 { 00:18:28.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.044 "dma_device_type": 2 00:18:28.044 } 00:18:28.044 ], 00:18:28.044 "driver_specific": {} 00:18:28.044 } 00:18:28.044 ] 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.044 "name": "Existed_Raid", 00:18:28.044 "uuid": "3a8c0b7e-a772-4534-a98f-34022241fb0c", 00:18:28.044 "strip_size_kb": 0, 00:18:28.044 "state": "configuring", 00:18:28.044 "raid_level": "raid1", 00:18:28.044 "superblock": true, 00:18:28.044 "num_base_bdevs": 4, 00:18:28.044 "num_base_bdevs_discovered": 2, 00:18:28.044 "num_base_bdevs_operational": 4, 00:18:28.044 
"base_bdevs_list": [ 00:18:28.044 { 00:18:28.044 "name": "BaseBdev1", 00:18:28.044 "uuid": "e16b9a15-e7e8-47b1-9362-f95129fe3a0b", 00:18:28.044 "is_configured": true, 00:18:28.044 "data_offset": 2048, 00:18:28.044 "data_size": 63488 00:18:28.044 }, 00:18:28.044 { 00:18:28.044 "name": "BaseBdev2", 00:18:28.044 "uuid": "eff770c7-0629-4677-8913-f62b1aa10342", 00:18:28.044 "is_configured": true, 00:18:28.044 "data_offset": 2048, 00:18:28.044 "data_size": 63488 00:18:28.044 }, 00:18:28.044 { 00:18:28.044 "name": "BaseBdev3", 00:18:28.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.044 "is_configured": false, 00:18:28.044 "data_offset": 0, 00:18:28.044 "data_size": 0 00:18:28.044 }, 00:18:28.044 { 00:18:28.044 "name": "BaseBdev4", 00:18:28.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.044 "is_configured": false, 00:18:28.044 "data_offset": 0, 00:18:28.044 "data_size": 0 00:18:28.044 } 00:18:28.044 ] 00:18:28.044 }' 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.044 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.303 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:28.303 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.303 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.561 [2024-12-06 13:12:34.855957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.561 BaseBdev3 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.561 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.562 [ 00:18:28.562 { 00:18:28.562 "name": "BaseBdev3", 00:18:28.562 "aliases": [ 00:18:28.562 "5df72958-f8ef-47e7-a548-445fe08b0568" 00:18:28.562 ], 00:18:28.562 "product_name": "Malloc disk", 00:18:28.562 "block_size": 512, 00:18:28.562 "num_blocks": 65536, 00:18:28.562 "uuid": "5df72958-f8ef-47e7-a548-445fe08b0568", 00:18:28.562 "assigned_rate_limits": { 00:18:28.562 "rw_ios_per_sec": 0, 00:18:28.562 "rw_mbytes_per_sec": 0, 00:18:28.562 "r_mbytes_per_sec": 0, 00:18:28.562 "w_mbytes_per_sec": 0 00:18:28.562 }, 00:18:28.562 "claimed": true, 00:18:28.562 "claim_type": "exclusive_write", 00:18:28.562 "zoned": false, 00:18:28.562 "supported_io_types": { 00:18:28.562 "read": true, 00:18:28.562 
"write": true, 00:18:28.562 "unmap": true, 00:18:28.562 "flush": true, 00:18:28.562 "reset": true, 00:18:28.562 "nvme_admin": false, 00:18:28.562 "nvme_io": false, 00:18:28.562 "nvme_io_md": false, 00:18:28.562 "write_zeroes": true, 00:18:28.562 "zcopy": true, 00:18:28.562 "get_zone_info": false, 00:18:28.562 "zone_management": false, 00:18:28.562 "zone_append": false, 00:18:28.562 "compare": false, 00:18:28.562 "compare_and_write": false, 00:18:28.562 "abort": true, 00:18:28.562 "seek_hole": false, 00:18:28.562 "seek_data": false, 00:18:28.562 "copy": true, 00:18:28.562 "nvme_iov_md": false 00:18:28.562 }, 00:18:28.562 "memory_domains": [ 00:18:28.562 { 00:18:28.562 "dma_device_id": "system", 00:18:28.562 "dma_device_type": 1 00:18:28.562 }, 00:18:28.562 { 00:18:28.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.562 "dma_device_type": 2 00:18:28.562 } 00:18:28.562 ], 00:18:28.562 "driver_specific": {} 00:18:28.562 } 00:18:28.562 ] 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.562 "name": "Existed_Raid", 00:18:28.562 "uuid": "3a8c0b7e-a772-4534-a98f-34022241fb0c", 00:18:28.562 "strip_size_kb": 0, 00:18:28.562 "state": "configuring", 00:18:28.562 "raid_level": "raid1", 00:18:28.562 "superblock": true, 00:18:28.562 "num_base_bdevs": 4, 00:18:28.562 "num_base_bdevs_discovered": 3, 00:18:28.562 "num_base_bdevs_operational": 4, 00:18:28.562 "base_bdevs_list": [ 00:18:28.562 { 00:18:28.562 "name": "BaseBdev1", 00:18:28.562 "uuid": "e16b9a15-e7e8-47b1-9362-f95129fe3a0b", 00:18:28.562 "is_configured": true, 00:18:28.562 "data_offset": 2048, 00:18:28.562 "data_size": 63488 00:18:28.562 }, 00:18:28.562 { 00:18:28.562 "name": "BaseBdev2", 00:18:28.562 "uuid": 
"eff770c7-0629-4677-8913-f62b1aa10342", 00:18:28.562 "is_configured": true, 00:18:28.562 "data_offset": 2048, 00:18:28.562 "data_size": 63488 00:18:28.562 }, 00:18:28.562 { 00:18:28.562 "name": "BaseBdev3", 00:18:28.562 "uuid": "5df72958-f8ef-47e7-a548-445fe08b0568", 00:18:28.562 "is_configured": true, 00:18:28.562 "data_offset": 2048, 00:18:28.562 "data_size": 63488 00:18:28.562 }, 00:18:28.562 { 00:18:28.562 "name": "BaseBdev4", 00:18:28.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.562 "is_configured": false, 00:18:28.562 "data_offset": 0, 00:18:28.562 "data_size": 0 00:18:28.562 } 00:18:28.562 ] 00:18:28.562 }' 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.562 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.128 [2024-12-06 13:12:35.460424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:29.128 BaseBdev4 00:18:29.128 [2024-12-06 13:12:35.461025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:29.128 [2024-12-06 13:12:35.461063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:29.128 [2024-12-06 13:12:35.461575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:29.128 [2024-12-06 13:12:35.461873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:29.128 [2024-12-06 13:12:35.461903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.128 [2024-12-06 13:12:35.462200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.128 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.128 [ 00:18:29.128 { 00:18:29.128 "name": "BaseBdev4", 00:18:29.128 "aliases": [ 00:18:29.128 "716b7049-9a3e-44ea-abad-00085428a28c" 00:18:29.128 ], 00:18:29.128 "product_name": "Malloc disk", 00:18:29.128 "block_size": 512, 00:18:29.128 
"num_blocks": 65536, 00:18:29.128 "uuid": "716b7049-9a3e-44ea-abad-00085428a28c", 00:18:29.128 "assigned_rate_limits": { 00:18:29.128 "rw_ios_per_sec": 0, 00:18:29.128 "rw_mbytes_per_sec": 0, 00:18:29.128 "r_mbytes_per_sec": 0, 00:18:29.128 "w_mbytes_per_sec": 0 00:18:29.128 }, 00:18:29.129 "claimed": true, 00:18:29.129 "claim_type": "exclusive_write", 00:18:29.129 "zoned": false, 00:18:29.129 "supported_io_types": { 00:18:29.129 "read": true, 00:18:29.129 "write": true, 00:18:29.129 "unmap": true, 00:18:29.129 "flush": true, 00:18:29.129 "reset": true, 00:18:29.129 "nvme_admin": false, 00:18:29.129 "nvme_io": false, 00:18:29.129 "nvme_io_md": false, 00:18:29.129 "write_zeroes": true, 00:18:29.129 "zcopy": true, 00:18:29.129 "get_zone_info": false, 00:18:29.129 "zone_management": false, 00:18:29.129 "zone_append": false, 00:18:29.129 "compare": false, 00:18:29.129 "compare_and_write": false, 00:18:29.129 "abort": true, 00:18:29.129 "seek_hole": false, 00:18:29.129 "seek_data": false, 00:18:29.129 "copy": true, 00:18:29.129 "nvme_iov_md": false 00:18:29.129 }, 00:18:29.129 "memory_domains": [ 00:18:29.129 { 00:18:29.129 "dma_device_id": "system", 00:18:29.129 "dma_device_type": 1 00:18:29.129 }, 00:18:29.129 { 00:18:29.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.129 "dma_device_type": 2 00:18:29.129 } 00:18:29.129 ], 00:18:29.129 "driver_specific": {} 00:18:29.129 } 00:18:29.129 ] 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.129 "name": "Existed_Raid", 00:18:29.129 "uuid": "3a8c0b7e-a772-4534-a98f-34022241fb0c", 00:18:29.129 "strip_size_kb": 0, 00:18:29.129 "state": "online", 00:18:29.129 "raid_level": "raid1", 00:18:29.129 "superblock": true, 00:18:29.129 "num_base_bdevs": 4, 
00:18:29.129 "num_base_bdevs_discovered": 4, 00:18:29.129 "num_base_bdevs_operational": 4, 00:18:29.129 "base_bdevs_list": [ 00:18:29.129 { 00:18:29.129 "name": "BaseBdev1", 00:18:29.129 "uuid": "e16b9a15-e7e8-47b1-9362-f95129fe3a0b", 00:18:29.129 "is_configured": true, 00:18:29.129 "data_offset": 2048, 00:18:29.129 "data_size": 63488 00:18:29.129 }, 00:18:29.129 { 00:18:29.129 "name": "BaseBdev2", 00:18:29.129 "uuid": "eff770c7-0629-4677-8913-f62b1aa10342", 00:18:29.129 "is_configured": true, 00:18:29.129 "data_offset": 2048, 00:18:29.129 "data_size": 63488 00:18:29.129 }, 00:18:29.129 { 00:18:29.129 "name": "BaseBdev3", 00:18:29.129 "uuid": "5df72958-f8ef-47e7-a548-445fe08b0568", 00:18:29.129 "is_configured": true, 00:18:29.129 "data_offset": 2048, 00:18:29.129 "data_size": 63488 00:18:29.129 }, 00:18:29.129 { 00:18:29.129 "name": "BaseBdev4", 00:18:29.129 "uuid": "716b7049-9a3e-44ea-abad-00085428a28c", 00:18:29.129 "is_configured": true, 00:18:29.129 "data_offset": 2048, 00:18:29.129 "data_size": 63488 00:18:29.129 } 00:18:29.129 ] 00:18:29.129 }' 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.129 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:29.696 
13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.696 [2024-12-06 13:12:36.025291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.696 "name": "Existed_Raid", 00:18:29.696 "aliases": [ 00:18:29.696 "3a8c0b7e-a772-4534-a98f-34022241fb0c" 00:18:29.696 ], 00:18:29.696 "product_name": "Raid Volume", 00:18:29.696 "block_size": 512, 00:18:29.696 "num_blocks": 63488, 00:18:29.696 "uuid": "3a8c0b7e-a772-4534-a98f-34022241fb0c", 00:18:29.696 "assigned_rate_limits": { 00:18:29.696 "rw_ios_per_sec": 0, 00:18:29.696 "rw_mbytes_per_sec": 0, 00:18:29.696 "r_mbytes_per_sec": 0, 00:18:29.696 "w_mbytes_per_sec": 0 00:18:29.696 }, 00:18:29.696 "claimed": false, 00:18:29.696 "zoned": false, 00:18:29.696 "supported_io_types": { 00:18:29.696 "read": true, 00:18:29.696 "write": true, 00:18:29.696 "unmap": false, 00:18:29.696 "flush": false, 00:18:29.696 "reset": true, 00:18:29.696 "nvme_admin": false, 00:18:29.696 "nvme_io": false, 00:18:29.696 "nvme_io_md": false, 00:18:29.696 "write_zeroes": true, 00:18:29.696 "zcopy": false, 00:18:29.696 "get_zone_info": false, 00:18:29.696 "zone_management": false, 00:18:29.696 "zone_append": false, 00:18:29.696 "compare": false, 00:18:29.696 "compare_and_write": false, 00:18:29.696 "abort": false, 00:18:29.696 "seek_hole": false, 00:18:29.696 "seek_data": false, 00:18:29.696 "copy": false, 00:18:29.696 
"nvme_iov_md": false 00:18:29.696 }, 00:18:29.696 "memory_domains": [ 00:18:29.696 { 00:18:29.696 "dma_device_id": "system", 00:18:29.696 "dma_device_type": 1 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.696 "dma_device_type": 2 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "dma_device_id": "system", 00:18:29.696 "dma_device_type": 1 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.696 "dma_device_type": 2 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "dma_device_id": "system", 00:18:29.696 "dma_device_type": 1 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.696 "dma_device_type": 2 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "dma_device_id": "system", 00:18:29.696 "dma_device_type": 1 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.696 "dma_device_type": 2 00:18:29.696 } 00:18:29.696 ], 00:18:29.696 "driver_specific": { 00:18:29.696 "raid": { 00:18:29.696 "uuid": "3a8c0b7e-a772-4534-a98f-34022241fb0c", 00:18:29.696 "strip_size_kb": 0, 00:18:29.696 "state": "online", 00:18:29.696 "raid_level": "raid1", 00:18:29.696 "superblock": true, 00:18:29.696 "num_base_bdevs": 4, 00:18:29.696 "num_base_bdevs_discovered": 4, 00:18:29.696 "num_base_bdevs_operational": 4, 00:18:29.696 "base_bdevs_list": [ 00:18:29.696 { 00:18:29.696 "name": "BaseBdev1", 00:18:29.696 "uuid": "e16b9a15-e7e8-47b1-9362-f95129fe3a0b", 00:18:29.696 "is_configured": true, 00:18:29.696 "data_offset": 2048, 00:18:29.696 "data_size": 63488 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "name": "BaseBdev2", 00:18:29.696 "uuid": "eff770c7-0629-4677-8913-f62b1aa10342", 00:18:29.696 "is_configured": true, 00:18:29.696 "data_offset": 2048, 00:18:29.696 "data_size": 63488 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "name": "BaseBdev3", 00:18:29.696 "uuid": "5df72958-f8ef-47e7-a548-445fe08b0568", 00:18:29.696 "is_configured": true, 
00:18:29.696 "data_offset": 2048, 00:18:29.696 "data_size": 63488 00:18:29.696 }, 00:18:29.696 { 00:18:29.696 "name": "BaseBdev4", 00:18:29.696 "uuid": "716b7049-9a3e-44ea-abad-00085428a28c", 00:18:29.696 "is_configured": true, 00:18:29.696 "data_offset": 2048, 00:18:29.696 "data_size": 63488 00:18:29.696 } 00:18:29.696 ] 00:18:29.696 } 00:18:29.696 } 00:18:29.696 }' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:29.696 BaseBdev2 00:18:29.696 BaseBdev3 00:18:29.696 BaseBdev4' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.696 13:12:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.696 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.955 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.955 [2024-12-06 13:12:36.384925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:30.214 13:12:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.214 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.214 "name": "Existed_Raid", 00:18:30.214 "uuid": "3a8c0b7e-a772-4534-a98f-34022241fb0c", 00:18:30.214 "strip_size_kb": 0, 00:18:30.214 
"state": "online", 00:18:30.214 "raid_level": "raid1", 00:18:30.214 "superblock": true, 00:18:30.214 "num_base_bdevs": 4, 00:18:30.214 "num_base_bdevs_discovered": 3, 00:18:30.214 "num_base_bdevs_operational": 3, 00:18:30.214 "base_bdevs_list": [ 00:18:30.214 { 00:18:30.214 "name": null, 00:18:30.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.214 "is_configured": false, 00:18:30.214 "data_offset": 0, 00:18:30.214 "data_size": 63488 00:18:30.214 }, 00:18:30.214 { 00:18:30.214 "name": "BaseBdev2", 00:18:30.214 "uuid": "eff770c7-0629-4677-8913-f62b1aa10342", 00:18:30.214 "is_configured": true, 00:18:30.215 "data_offset": 2048, 00:18:30.215 "data_size": 63488 00:18:30.215 }, 00:18:30.215 { 00:18:30.215 "name": "BaseBdev3", 00:18:30.215 "uuid": "5df72958-f8ef-47e7-a548-445fe08b0568", 00:18:30.215 "is_configured": true, 00:18:30.215 "data_offset": 2048, 00:18:30.215 "data_size": 63488 00:18:30.215 }, 00:18:30.215 { 00:18:30.215 "name": "BaseBdev4", 00:18:30.215 "uuid": "716b7049-9a3e-44ea-abad-00085428a28c", 00:18:30.215 "is_configured": true, 00:18:30.215 "data_offset": 2048, 00:18:30.215 "data_size": 63488 00:18:30.215 } 00:18:30.215 ] 00:18:30.215 }' 00:18:30.215 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.215 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.473 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:30.473 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.473 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.473 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:30.473 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.473 13:12:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.732 [2024-12-06 13:12:37.047376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.732 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.732 [2024-12-06 13:12:37.203340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.990 [2024-12-06 13:12:37.383578] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:30.990 [2024-12-06 13:12:37.383917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.990 [2024-12-06 13:12:37.488154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.990 [2024-12-06 13:12:37.488681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.990 [2024-12-06 13:12:37.488730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.990 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.248 BaseBdev2 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:31.248 [ 00:18:31.248 { 00:18:31.248 "name": "BaseBdev2", 00:18:31.248 "aliases": [ 00:18:31.248 "fc5da49c-1ce4-4792-8efa-3d85043c1505" 00:18:31.248 ], 00:18:31.248 "product_name": "Malloc disk", 00:18:31.248 "block_size": 512, 00:18:31.248 "num_blocks": 65536, 00:18:31.248 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:31.248 "assigned_rate_limits": { 00:18:31.248 "rw_ios_per_sec": 0, 00:18:31.248 "rw_mbytes_per_sec": 0, 00:18:31.248 "r_mbytes_per_sec": 0, 00:18:31.248 "w_mbytes_per_sec": 0 00:18:31.248 }, 00:18:31.248 "claimed": false, 00:18:31.248 "zoned": false, 00:18:31.248 "supported_io_types": { 00:18:31.248 "read": true, 00:18:31.248 "write": true, 00:18:31.248 "unmap": true, 00:18:31.248 "flush": true, 00:18:31.248 "reset": true, 00:18:31.248 "nvme_admin": false, 00:18:31.248 "nvme_io": false, 00:18:31.248 "nvme_io_md": false, 00:18:31.248 "write_zeroes": true, 00:18:31.248 "zcopy": true, 00:18:31.248 "get_zone_info": false, 00:18:31.248 "zone_management": false, 00:18:31.248 "zone_append": false, 00:18:31.248 "compare": false, 00:18:31.248 "compare_and_write": false, 00:18:31.248 "abort": true, 00:18:31.248 "seek_hole": false, 00:18:31.248 "seek_data": false, 00:18:31.248 "copy": true, 00:18:31.248 "nvme_iov_md": false 00:18:31.248 }, 00:18:31.248 "memory_domains": [ 00:18:31.248 { 00:18:31.248 "dma_device_id": "system", 00:18:31.248 "dma_device_type": 1 00:18:31.248 }, 00:18:31.248 { 00:18:31.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.248 "dma_device_type": 2 00:18:31.248 } 00:18:31.248 ], 00:18:31.248 "driver_specific": {} 00:18:31.248 } 00:18:31.248 ] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:31.248 13:12:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.248 BaseBdev3 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:31.248 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.248 13:12:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.248 [ 00:18:31.248 { 00:18:31.248 "name": "BaseBdev3", 00:18:31.248 "aliases": [ 00:18:31.248 "5069327f-03a5-4011-8bf4-ae0fe73653d8" 00:18:31.248 ], 00:18:31.248 "product_name": "Malloc disk", 00:18:31.248 "block_size": 512, 00:18:31.248 "num_blocks": 65536, 00:18:31.248 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:31.248 "assigned_rate_limits": { 00:18:31.248 "rw_ios_per_sec": 0, 00:18:31.248 "rw_mbytes_per_sec": 0, 00:18:31.249 "r_mbytes_per_sec": 0, 00:18:31.249 "w_mbytes_per_sec": 0 00:18:31.249 }, 00:18:31.249 "claimed": false, 00:18:31.249 "zoned": false, 00:18:31.249 "supported_io_types": { 00:18:31.249 "read": true, 00:18:31.249 "write": true, 00:18:31.249 "unmap": true, 00:18:31.249 "flush": true, 00:18:31.249 "reset": true, 00:18:31.249 "nvme_admin": false, 00:18:31.249 "nvme_io": false, 00:18:31.249 "nvme_io_md": false, 00:18:31.249 "write_zeroes": true, 00:18:31.249 "zcopy": true, 00:18:31.249 "get_zone_info": false, 00:18:31.249 "zone_management": false, 00:18:31.249 "zone_append": false, 00:18:31.249 "compare": false, 00:18:31.249 "compare_and_write": false, 00:18:31.249 "abort": true, 00:18:31.249 "seek_hole": false, 00:18:31.249 "seek_data": false, 00:18:31.249 "copy": true, 00:18:31.249 "nvme_iov_md": false 00:18:31.249 }, 00:18:31.249 "memory_domains": [ 00:18:31.249 { 00:18:31.249 "dma_device_id": "system", 00:18:31.249 "dma_device_type": 1 00:18:31.249 }, 00:18:31.249 { 00:18:31.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.249 "dma_device_type": 2 00:18:31.249 } 00:18:31.249 ], 00:18:31.249 "driver_specific": {} 00:18:31.249 } 00:18:31.249 ] 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.249 BaseBdev4 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.249 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 [ 00:18:31.507 { 00:18:31.507 "name": "BaseBdev4", 00:18:31.507 "aliases": [ 00:18:31.507 "fa090116-9c73-45a6-80d3-3661577d121c" 00:18:31.507 ], 00:18:31.507 "product_name": "Malloc disk", 00:18:31.507 "block_size": 512, 00:18:31.507 "num_blocks": 65536, 00:18:31.507 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:31.507 "assigned_rate_limits": { 00:18:31.507 "rw_ios_per_sec": 0, 00:18:31.507 "rw_mbytes_per_sec": 0, 00:18:31.507 "r_mbytes_per_sec": 0, 00:18:31.507 "w_mbytes_per_sec": 0 00:18:31.507 }, 00:18:31.507 "claimed": false, 00:18:31.507 "zoned": false, 00:18:31.507 "supported_io_types": { 00:18:31.507 "read": true, 00:18:31.507 "write": true, 00:18:31.507 "unmap": true, 00:18:31.507 "flush": true, 00:18:31.507 "reset": true, 00:18:31.507 "nvme_admin": false, 00:18:31.507 "nvme_io": false, 00:18:31.507 "nvme_io_md": false, 00:18:31.507 "write_zeroes": true, 00:18:31.507 "zcopy": true, 00:18:31.507 "get_zone_info": false, 00:18:31.507 "zone_management": false, 00:18:31.507 "zone_append": false, 00:18:31.507 "compare": false, 00:18:31.507 "compare_and_write": false, 00:18:31.507 "abort": true, 00:18:31.507 "seek_hole": false, 00:18:31.507 "seek_data": false, 00:18:31.507 "copy": true, 00:18:31.507 "nvme_iov_md": false 00:18:31.507 }, 00:18:31.507 "memory_domains": [ 00:18:31.507 { 00:18:31.507 "dma_device_id": "system", 00:18:31.507 "dma_device_type": 1 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.507 "dma_device_type": 2 00:18:31.507 } 00:18:31.507 ], 00:18:31.507 "driver_specific": {} 00:18:31.507 } 00:18:31.507 ] 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 [2024-12-06 13:12:37.793227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.507 [2024-12-06 13:12:37.793480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.507 [2024-12-06 13:12:37.793630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.507 [2024-12-06 13:12:37.796549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.507 [2024-12-06 13:12:37.796623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.507 "name": "Existed_Raid", 00:18:31.507 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:31.507 "strip_size_kb": 0, 00:18:31.507 "state": "configuring", 00:18:31.507 "raid_level": "raid1", 00:18:31.507 "superblock": true, 00:18:31.507 "num_base_bdevs": 4, 00:18:31.507 "num_base_bdevs_discovered": 3, 00:18:31.507 "num_base_bdevs_operational": 4, 00:18:31.507 "base_bdevs_list": [ 00:18:31.507 { 00:18:31.507 "name": "BaseBdev1", 00:18:31.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.507 "is_configured": false, 00:18:31.507 "data_offset": 0, 00:18:31.507 "data_size": 0 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "name": "BaseBdev2", 00:18:31.507 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 
00:18:31.507 "is_configured": true, 00:18:31.507 "data_offset": 2048, 00:18:31.507 "data_size": 63488 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "name": "BaseBdev3", 00:18:31.507 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:31.507 "is_configured": true, 00:18:31.507 "data_offset": 2048, 00:18:31.507 "data_size": 63488 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "name": "BaseBdev4", 00:18:31.507 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:31.507 "is_configured": true, 00:18:31.507 "data_offset": 2048, 00:18:31.507 "data_size": 63488 00:18:31.507 } 00:18:31.507 ] 00:18:31.507 }' 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.507 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.073 [2024-12-06 13:12:38.325398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.073 "name": "Existed_Raid", 00:18:32.073 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:32.073 "strip_size_kb": 0, 00:18:32.073 "state": "configuring", 00:18:32.073 "raid_level": "raid1", 00:18:32.073 "superblock": true, 00:18:32.073 "num_base_bdevs": 4, 00:18:32.073 "num_base_bdevs_discovered": 2, 00:18:32.073 "num_base_bdevs_operational": 4, 00:18:32.073 "base_bdevs_list": [ 00:18:32.073 { 00:18:32.073 "name": "BaseBdev1", 00:18:32.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.073 "is_configured": false, 00:18:32.073 "data_offset": 0, 00:18:32.073 "data_size": 0 00:18:32.073 }, 00:18:32.073 { 00:18:32.073 "name": null, 00:18:32.073 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:32.073 
"is_configured": false, 00:18:32.073 "data_offset": 0, 00:18:32.073 "data_size": 63488 00:18:32.073 }, 00:18:32.073 { 00:18:32.073 "name": "BaseBdev3", 00:18:32.073 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:32.073 "is_configured": true, 00:18:32.073 "data_offset": 2048, 00:18:32.073 "data_size": 63488 00:18:32.073 }, 00:18:32.073 { 00:18:32.073 "name": "BaseBdev4", 00:18:32.073 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:32.073 "is_configured": true, 00:18:32.073 "data_offset": 2048, 00:18:32.073 "data_size": 63488 00:18:32.073 } 00:18:32.073 ] 00:18:32.073 }' 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.073 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.332 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.332 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:32.332 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.332 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.591 [2024-12-06 13:12:38.944380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.591 BaseBdev1 
00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.591 [ 00:18:32.591 { 00:18:32.591 "name": "BaseBdev1", 00:18:32.591 "aliases": [ 00:18:32.591 "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1" 00:18:32.591 ], 00:18:32.591 "product_name": "Malloc disk", 00:18:32.591 "block_size": 512, 00:18:32.591 "num_blocks": 65536, 00:18:32.591 "uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:32.591 "assigned_rate_limits": { 00:18:32.591 
"rw_ios_per_sec": 0, 00:18:32.591 "rw_mbytes_per_sec": 0, 00:18:32.591 "r_mbytes_per_sec": 0, 00:18:32.591 "w_mbytes_per_sec": 0 00:18:32.591 }, 00:18:32.591 "claimed": true, 00:18:32.591 "claim_type": "exclusive_write", 00:18:32.591 "zoned": false, 00:18:32.591 "supported_io_types": { 00:18:32.591 "read": true, 00:18:32.591 "write": true, 00:18:32.591 "unmap": true, 00:18:32.591 "flush": true, 00:18:32.591 "reset": true, 00:18:32.591 "nvme_admin": false, 00:18:32.591 "nvme_io": false, 00:18:32.591 "nvme_io_md": false, 00:18:32.591 "write_zeroes": true, 00:18:32.591 "zcopy": true, 00:18:32.591 "get_zone_info": false, 00:18:32.591 "zone_management": false, 00:18:32.591 "zone_append": false, 00:18:32.591 "compare": false, 00:18:32.591 "compare_and_write": false, 00:18:32.591 "abort": true, 00:18:32.591 "seek_hole": false, 00:18:32.591 "seek_data": false, 00:18:32.591 "copy": true, 00:18:32.591 "nvme_iov_md": false 00:18:32.591 }, 00:18:32.591 "memory_domains": [ 00:18:32.591 { 00:18:32.591 "dma_device_id": "system", 00:18:32.591 "dma_device_type": 1 00:18:32.591 }, 00:18:32.591 { 00:18:32.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.591 "dma_device_type": 2 00:18:32.591 } 00:18:32.591 ], 00:18:32.591 "driver_specific": {} 00:18:32.591 } 00:18:32.591 ] 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.591 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.591 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.591 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.591 "name": "Existed_Raid", 00:18:32.591 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:32.591 "strip_size_kb": 0, 00:18:32.591 "state": "configuring", 00:18:32.592 "raid_level": "raid1", 00:18:32.592 "superblock": true, 00:18:32.592 "num_base_bdevs": 4, 00:18:32.592 "num_base_bdevs_discovered": 3, 00:18:32.592 "num_base_bdevs_operational": 4, 00:18:32.592 "base_bdevs_list": [ 00:18:32.592 { 00:18:32.592 "name": "BaseBdev1", 00:18:32.592 "uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:32.592 "is_configured": true, 00:18:32.592 "data_offset": 2048, 00:18:32.592 "data_size": 63488 
00:18:32.592 }, 00:18:32.592 { 00:18:32.592 "name": null, 00:18:32.592 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:32.592 "is_configured": false, 00:18:32.592 "data_offset": 0, 00:18:32.592 "data_size": 63488 00:18:32.592 }, 00:18:32.592 { 00:18:32.592 "name": "BaseBdev3", 00:18:32.592 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:32.592 "is_configured": true, 00:18:32.592 "data_offset": 2048, 00:18:32.592 "data_size": 63488 00:18:32.592 }, 00:18:32.592 { 00:18:32.592 "name": "BaseBdev4", 00:18:32.592 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:32.592 "is_configured": true, 00:18:32.592 "data_offset": 2048, 00:18:32.592 "data_size": 63488 00:18:32.592 } 00:18:32.592 ] 00:18:32.592 }' 00:18:32.592 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.592 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.158 
[2024-12-06 13:12:39.568807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.158 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.158 13:12:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.158 "name": "Existed_Raid", 00:18:33.158 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:33.158 "strip_size_kb": 0, 00:18:33.158 "state": "configuring", 00:18:33.158 "raid_level": "raid1", 00:18:33.158 "superblock": true, 00:18:33.159 "num_base_bdevs": 4, 00:18:33.159 "num_base_bdevs_discovered": 2, 00:18:33.159 "num_base_bdevs_operational": 4, 00:18:33.159 "base_bdevs_list": [ 00:18:33.159 { 00:18:33.159 "name": "BaseBdev1", 00:18:33.159 "uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:33.159 "is_configured": true, 00:18:33.159 "data_offset": 2048, 00:18:33.159 "data_size": 63488 00:18:33.159 }, 00:18:33.159 { 00:18:33.159 "name": null, 00:18:33.159 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:33.159 "is_configured": false, 00:18:33.159 "data_offset": 0, 00:18:33.159 "data_size": 63488 00:18:33.159 }, 00:18:33.159 { 00:18:33.159 "name": null, 00:18:33.159 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:33.159 "is_configured": false, 00:18:33.159 "data_offset": 0, 00:18:33.159 "data_size": 63488 00:18:33.159 }, 00:18:33.159 { 00:18:33.159 "name": "BaseBdev4", 00:18:33.159 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:33.159 "is_configured": true, 00:18:33.159 "data_offset": 2048, 00:18:33.159 "data_size": 63488 00:18:33.159 } 00:18:33.159 ] 00:18:33.159 }' 00:18:33.159 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.159 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.726 
13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.726 [2024-12-06 13:12:40.144926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.726 "name": "Existed_Raid", 00:18:33.726 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:33.726 "strip_size_kb": 0, 00:18:33.726 "state": "configuring", 00:18:33.726 "raid_level": "raid1", 00:18:33.726 "superblock": true, 00:18:33.726 "num_base_bdevs": 4, 00:18:33.726 "num_base_bdevs_discovered": 3, 00:18:33.726 "num_base_bdevs_operational": 4, 00:18:33.726 "base_bdevs_list": [ 00:18:33.726 { 00:18:33.726 "name": "BaseBdev1", 00:18:33.726 "uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:33.726 "is_configured": true, 00:18:33.726 "data_offset": 2048, 00:18:33.726 "data_size": 63488 00:18:33.726 }, 00:18:33.726 { 00:18:33.726 "name": null, 00:18:33.726 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:33.726 "is_configured": false, 00:18:33.726 "data_offset": 0, 00:18:33.726 "data_size": 63488 00:18:33.726 }, 00:18:33.726 { 00:18:33.726 "name": "BaseBdev3", 00:18:33.726 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:33.726 "is_configured": true, 00:18:33.726 "data_offset": 2048, 00:18:33.726 "data_size": 63488 00:18:33.726 }, 00:18:33.726 { 00:18:33.726 "name": "BaseBdev4", 00:18:33.726 "uuid": 
"fa090116-9c73-45a6-80d3-3661577d121c", 00:18:33.726 "is_configured": true, 00:18:33.726 "data_offset": 2048, 00:18:33.726 "data_size": 63488 00:18:33.726 } 00:18:33.726 ] 00:18:33.726 }' 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.726 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.293 [2024-12-06 13:12:40.713174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.293 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.552 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.552 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.552 "name": "Existed_Raid", 00:18:34.552 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:34.552 "strip_size_kb": 0, 00:18:34.552 "state": "configuring", 00:18:34.552 "raid_level": "raid1", 00:18:34.552 "superblock": true, 00:18:34.552 "num_base_bdevs": 4, 00:18:34.552 "num_base_bdevs_discovered": 2, 00:18:34.552 "num_base_bdevs_operational": 4, 00:18:34.552 "base_bdevs_list": [ 00:18:34.552 { 00:18:34.552 "name": null, 00:18:34.552 
"uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:34.552 "is_configured": false, 00:18:34.552 "data_offset": 0, 00:18:34.552 "data_size": 63488 00:18:34.552 }, 00:18:34.552 { 00:18:34.552 "name": null, 00:18:34.552 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:34.552 "is_configured": false, 00:18:34.552 "data_offset": 0, 00:18:34.552 "data_size": 63488 00:18:34.552 }, 00:18:34.552 { 00:18:34.552 "name": "BaseBdev3", 00:18:34.552 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:34.552 "is_configured": true, 00:18:34.552 "data_offset": 2048, 00:18:34.552 "data_size": 63488 00:18:34.552 }, 00:18:34.552 { 00:18:34.552 "name": "BaseBdev4", 00:18:34.552 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:34.552 "is_configured": true, 00:18:34.552 "data_offset": 2048, 00:18:34.552 "data_size": 63488 00:18:34.552 } 00:18:34.552 ] 00:18:34.552 }' 00:18:34.552 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.552 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.120 [2024-12-06 13:12:41.411970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.120 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.121 "name": "Existed_Raid", 00:18:35.121 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:35.121 "strip_size_kb": 0, 00:18:35.121 "state": "configuring", 00:18:35.121 "raid_level": "raid1", 00:18:35.121 "superblock": true, 00:18:35.121 "num_base_bdevs": 4, 00:18:35.121 "num_base_bdevs_discovered": 3, 00:18:35.121 "num_base_bdevs_operational": 4, 00:18:35.121 "base_bdevs_list": [ 00:18:35.121 { 00:18:35.121 "name": null, 00:18:35.121 "uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:35.121 "is_configured": false, 00:18:35.121 "data_offset": 0, 00:18:35.121 "data_size": 63488 00:18:35.121 }, 00:18:35.121 { 00:18:35.121 "name": "BaseBdev2", 00:18:35.121 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:35.121 "is_configured": true, 00:18:35.121 "data_offset": 2048, 00:18:35.121 "data_size": 63488 00:18:35.121 }, 00:18:35.121 { 00:18:35.121 "name": "BaseBdev3", 00:18:35.121 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:35.121 "is_configured": true, 00:18:35.121 "data_offset": 2048, 00:18:35.121 "data_size": 63488 00:18:35.121 }, 00:18:35.121 { 00:18:35.121 "name": "BaseBdev4", 00:18:35.121 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:35.121 "is_configured": true, 00:18:35.121 "data_offset": 2048, 00:18:35.121 "data_size": 63488 00:18:35.121 } 00:18:35.121 ] 00:18:35.121 }' 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.121 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:35.689 13:12:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:35.689 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.689 [2024-12-06 13:12:42.061477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:35.689 [2024-12-06 13:12:42.062139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:35.689 [2024-12-06 13:12:42.062171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:35.689 [2024-12-06 13:12:42.062574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:18:35.689 NewBaseBdev 00:18:35.689 [2024-12-06 13:12:42.062810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:35.689 [2024-12-06 13:12:42.062827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:35.689 [2024-12-06 13:12:42.063061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.689 13:12:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.689 [ 00:18:35.689 { 00:18:35.689 "name": "NewBaseBdev", 00:18:35.689 "aliases": [ 00:18:35.689 "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1" 00:18:35.689 ], 00:18:35.689 "product_name": "Malloc disk", 00:18:35.689 "block_size": 512, 00:18:35.689 "num_blocks": 65536, 00:18:35.689 "uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:35.689 "assigned_rate_limits": { 00:18:35.689 "rw_ios_per_sec": 0, 00:18:35.689 "rw_mbytes_per_sec": 0, 00:18:35.689 "r_mbytes_per_sec": 0, 00:18:35.689 "w_mbytes_per_sec": 0 00:18:35.689 }, 00:18:35.689 "claimed": true, 00:18:35.689 "claim_type": "exclusive_write", 00:18:35.689 "zoned": false, 00:18:35.689 "supported_io_types": { 00:18:35.689 "read": true, 00:18:35.689 "write": true, 00:18:35.689 "unmap": true, 00:18:35.689 "flush": true, 00:18:35.689 "reset": true, 00:18:35.689 "nvme_admin": false, 00:18:35.689 "nvme_io": false, 00:18:35.689 "nvme_io_md": false, 00:18:35.689 "write_zeroes": true, 00:18:35.689 "zcopy": true, 00:18:35.689 "get_zone_info": false, 00:18:35.689 "zone_management": false, 00:18:35.689 "zone_append": false, 00:18:35.689 "compare": false, 00:18:35.689 "compare_and_write": false, 00:18:35.689 "abort": true, 00:18:35.689 "seek_hole": false, 00:18:35.689 "seek_data": false, 00:18:35.689 "copy": true, 00:18:35.689 "nvme_iov_md": false 00:18:35.689 }, 00:18:35.689 "memory_domains": [ 00:18:35.689 { 00:18:35.689 "dma_device_id": "system", 00:18:35.689 "dma_device_type": 1 00:18:35.689 }, 00:18:35.689 { 00:18:35.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.689 "dma_device_type": 2 00:18:35.689 } 00:18:35.689 ], 00:18:35.689 "driver_specific": {} 00:18:35.689 } 00:18:35.689 ] 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:35.689 13:12:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.689 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.689 "name": "Existed_Raid", 00:18:35.689 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:35.689 "strip_size_kb": 0, 00:18:35.689 
"state": "online", 00:18:35.689 "raid_level": "raid1", 00:18:35.689 "superblock": true, 00:18:35.689 "num_base_bdevs": 4, 00:18:35.689 "num_base_bdevs_discovered": 4, 00:18:35.689 "num_base_bdevs_operational": 4, 00:18:35.689 "base_bdevs_list": [ 00:18:35.689 { 00:18:35.689 "name": "NewBaseBdev", 00:18:35.689 "uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:35.689 "is_configured": true, 00:18:35.689 "data_offset": 2048, 00:18:35.689 "data_size": 63488 00:18:35.689 }, 00:18:35.689 { 00:18:35.689 "name": "BaseBdev2", 00:18:35.690 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:35.690 "is_configured": true, 00:18:35.690 "data_offset": 2048, 00:18:35.690 "data_size": 63488 00:18:35.690 }, 00:18:35.690 { 00:18:35.690 "name": "BaseBdev3", 00:18:35.690 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:35.690 "is_configured": true, 00:18:35.690 "data_offset": 2048, 00:18:35.690 "data_size": 63488 00:18:35.690 }, 00:18:35.690 { 00:18:35.690 "name": "BaseBdev4", 00:18:35.690 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:35.690 "is_configured": true, 00:18:35.690 "data_offset": 2048, 00:18:35.690 "data_size": 63488 00:18:35.690 } 00:18:35.690 ] 00:18:35.690 }' 00:18:35.690 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.690 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:36.324 
13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.324 [2024-12-06 13:12:42.618235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.324 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:36.324 "name": "Existed_Raid", 00:18:36.324 "aliases": [ 00:18:36.324 "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e" 00:18:36.324 ], 00:18:36.324 "product_name": "Raid Volume", 00:18:36.324 "block_size": 512, 00:18:36.324 "num_blocks": 63488, 00:18:36.324 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:36.324 "assigned_rate_limits": { 00:18:36.324 "rw_ios_per_sec": 0, 00:18:36.324 "rw_mbytes_per_sec": 0, 00:18:36.324 "r_mbytes_per_sec": 0, 00:18:36.324 "w_mbytes_per_sec": 0 00:18:36.324 }, 00:18:36.324 "claimed": false, 00:18:36.324 "zoned": false, 00:18:36.324 "supported_io_types": { 00:18:36.324 "read": true, 00:18:36.324 "write": true, 00:18:36.324 "unmap": false, 00:18:36.324 "flush": false, 00:18:36.324 "reset": true, 00:18:36.324 "nvme_admin": false, 00:18:36.324 "nvme_io": false, 00:18:36.324 "nvme_io_md": false, 00:18:36.324 "write_zeroes": true, 00:18:36.324 "zcopy": false, 00:18:36.324 "get_zone_info": false, 00:18:36.324 "zone_management": false, 00:18:36.324 "zone_append": false, 00:18:36.324 "compare": false, 00:18:36.324 "compare_and_write": false, 00:18:36.324 
"abort": false, 00:18:36.324 "seek_hole": false, 00:18:36.324 "seek_data": false, 00:18:36.325 "copy": false, 00:18:36.325 "nvme_iov_md": false 00:18:36.325 }, 00:18:36.325 "memory_domains": [ 00:18:36.325 { 00:18:36.325 "dma_device_id": "system", 00:18:36.325 "dma_device_type": 1 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.325 "dma_device_type": 2 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "dma_device_id": "system", 00:18:36.325 "dma_device_type": 1 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.325 "dma_device_type": 2 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "dma_device_id": "system", 00:18:36.325 "dma_device_type": 1 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.325 "dma_device_type": 2 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "dma_device_id": "system", 00:18:36.325 "dma_device_type": 1 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.325 "dma_device_type": 2 00:18:36.325 } 00:18:36.325 ], 00:18:36.325 "driver_specific": { 00:18:36.325 "raid": { 00:18:36.325 "uuid": "f72537e6-b3d0-4fe7-b4f7-8b81724ed76e", 00:18:36.325 "strip_size_kb": 0, 00:18:36.325 "state": "online", 00:18:36.325 "raid_level": "raid1", 00:18:36.325 "superblock": true, 00:18:36.325 "num_base_bdevs": 4, 00:18:36.325 "num_base_bdevs_discovered": 4, 00:18:36.325 "num_base_bdevs_operational": 4, 00:18:36.325 "base_bdevs_list": [ 00:18:36.325 { 00:18:36.325 "name": "NewBaseBdev", 00:18:36.325 "uuid": "3e9d17c4-aa1f-44a8-813f-7fd37ad38ce1", 00:18:36.325 "is_configured": true, 00:18:36.325 "data_offset": 2048, 00:18:36.325 "data_size": 63488 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "name": "BaseBdev2", 00:18:36.325 "uuid": "fc5da49c-1ce4-4792-8efa-3d85043c1505", 00:18:36.325 "is_configured": true, 00:18:36.325 "data_offset": 2048, 00:18:36.325 "data_size": 63488 00:18:36.325 }, 00:18:36.325 { 
00:18:36.325 "name": "BaseBdev3", 00:18:36.325 "uuid": "5069327f-03a5-4011-8bf4-ae0fe73653d8", 00:18:36.325 "is_configured": true, 00:18:36.325 "data_offset": 2048, 00:18:36.325 "data_size": 63488 00:18:36.325 }, 00:18:36.325 { 00:18:36.325 "name": "BaseBdev4", 00:18:36.325 "uuid": "fa090116-9c73-45a6-80d3-3661577d121c", 00:18:36.325 "is_configured": true, 00:18:36.325 "data_offset": 2048, 00:18:36.325 "data_size": 63488 00:18:36.325 } 00:18:36.325 ] 00:18:36.325 } 00:18:36.325 } 00:18:36.325 }' 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:36.325 BaseBdev2 00:18:36.325 BaseBdev3 00:18:36.325 BaseBdev4' 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.325 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.583 [2024-12-06 13:12:42.977813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:36.583 [2024-12-06 13:12:42.978005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.583 [2024-12-06 13:12:42.978255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.583 [2024-12-06 13:12:42.978797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.583 [2024-12-06 13:12:42.978935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74292 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74292 ']' 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74292 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.583 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74292 00:18:36.584 killing process with pid 74292 00:18:36.584 13:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.584 13:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.584 13:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74292' 00:18:36.584 13:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74292 00:18:36.584 [2024-12-06 13:12:43.013715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.584 13:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74292 00:18:37.149 [2024-12-06 13:12:43.377998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.118 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:38.118 00:18:38.118 real 0m13.167s 00:18:38.118 user 0m21.529s 00:18:38.118 sys 0m1.975s 00:18:38.118 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:18:38.118 ************************************ 00:18:38.118 END TEST raid_state_function_test_sb 00:18:38.118 ************************************ 00:18:38.118 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.118 13:12:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:38.118 13:12:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:38.118 13:12:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.118 13:12:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.118 ************************************ 00:18:38.118 START TEST raid_superblock_test 00:18:38.118 ************************************ 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:38.118 13:12:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74978 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74978 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74978 ']' 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.118 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.376 [2024-12-06 13:12:44.693537] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:18:38.376 [2024-12-06 13:12:44.693754] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74978 ] 00:18:38.376 [2024-12-06 13:12:44.876283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.633 [2024-12-06 13:12:45.018287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.932 [2024-12-06 13:12:45.235763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.932 [2024-12-06 13:12:45.235814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:39.190 
13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.190 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.448 malloc1 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.448 [2024-12-06 13:12:45.740935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.448 [2024-12-06 13:12:45.741017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.448 [2024-12-06 13:12:45.741051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:39.448 [2024-12-06 13:12:45.741067] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.448 [2024-12-06 13:12:45.743873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.448 [2024-12-06 13:12:45.743913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.448 pt1 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.448 malloc2 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.448 [2024-12-06 13:12:45.797516] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:39.448 [2024-12-06 13:12:45.797627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.448 [2024-12-06 13:12:45.797664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:39.448 [2024-12-06 13:12:45.797695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.448 [2024-12-06 13:12:45.800525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.448 [2024-12-06 13:12:45.800566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:39.448 
pt2 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.448 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 malloc3 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 [2024-12-06 13:12:45.865140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:39.449 [2024-12-06 13:12:45.865211] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.449 [2024-12-06 13:12:45.865245] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:39.449 [2024-12-06 13:12:45.865261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.449 [2024-12-06 13:12:45.868238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.449 [2024-12-06 13:12:45.868310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:39.449 pt3 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 malloc4 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 [2024-12-06 13:12:45.919571] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:39.449 [2024-12-06 13:12:45.919665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.449 [2024-12-06 13:12:45.919697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:39.449 [2024-12-06 13:12:45.919712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.449 [2024-12-06 13:12:45.923422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.449 [2024-12-06 13:12:45.923541] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:39.449 pt4 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 [2024-12-06 13:12:45.927792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:39.449 [2024-12-06 13:12:45.930451] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:39.449 [2024-12-06 13:12:45.930606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:39.449 [2024-12-06 13:12:45.930697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:39.449 [2024-12-06 13:12:45.930968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:39.449 [2024-12-06 13:12:45.931020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:39.449 [2024-12-06 13:12:45.931374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:39.449 [2024-12-06 13:12:45.931619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:39.449 [2024-12-06 13:12:45.931649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:39.449 [2024-12-06 13:12:45.931866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.449 
13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.449 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.706 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.706 "name": "raid_bdev1", 00:18:39.706 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:39.706 "strip_size_kb": 0, 00:18:39.706 "state": "online", 00:18:39.706 "raid_level": "raid1", 00:18:39.706 "superblock": true, 00:18:39.706 "num_base_bdevs": 4, 00:18:39.706 "num_base_bdevs_discovered": 4, 00:18:39.706 "num_base_bdevs_operational": 4, 00:18:39.706 "base_bdevs_list": [ 00:18:39.706 { 00:18:39.706 "name": "pt1", 00:18:39.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.706 "is_configured": true, 00:18:39.706 "data_offset": 2048, 00:18:39.706 "data_size": 63488 00:18:39.706 }, 00:18:39.706 { 00:18:39.707 "name": "pt2", 00:18:39.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.707 "is_configured": true, 00:18:39.707 "data_offset": 2048, 00:18:39.707 "data_size": 63488 00:18:39.707 }, 00:18:39.707 { 00:18:39.707 "name": "pt3", 00:18:39.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.707 "is_configured": true, 00:18:39.707 "data_offset": 2048, 00:18:39.707 "data_size": 63488 
00:18:39.707 }, 00:18:39.707 { 00:18:39.707 "name": "pt4", 00:18:39.707 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:39.707 "is_configured": true, 00:18:39.707 "data_offset": 2048, 00:18:39.707 "data_size": 63488 00:18:39.707 } 00:18:39.707 ] 00:18:39.707 }' 00:18:39.707 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.707 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:39.964 [2024-12-06 13:12:46.448570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.964 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.222 "name": "raid_bdev1", 00:18:40.222 "aliases": [ 00:18:40.222 "26d92589-7288-4f91-8d67-e6ab2ed4b0bb" 00:18:40.222 ], 
00:18:40.222 "product_name": "Raid Volume", 00:18:40.222 "block_size": 512, 00:18:40.222 "num_blocks": 63488, 00:18:40.222 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:40.222 "assigned_rate_limits": { 00:18:40.222 "rw_ios_per_sec": 0, 00:18:40.222 "rw_mbytes_per_sec": 0, 00:18:40.222 "r_mbytes_per_sec": 0, 00:18:40.222 "w_mbytes_per_sec": 0 00:18:40.222 }, 00:18:40.222 "claimed": false, 00:18:40.222 "zoned": false, 00:18:40.222 "supported_io_types": { 00:18:40.222 "read": true, 00:18:40.222 "write": true, 00:18:40.222 "unmap": false, 00:18:40.222 "flush": false, 00:18:40.222 "reset": true, 00:18:40.222 "nvme_admin": false, 00:18:40.222 "nvme_io": false, 00:18:40.222 "nvme_io_md": false, 00:18:40.222 "write_zeroes": true, 00:18:40.222 "zcopy": false, 00:18:40.222 "get_zone_info": false, 00:18:40.222 "zone_management": false, 00:18:40.222 "zone_append": false, 00:18:40.222 "compare": false, 00:18:40.222 "compare_and_write": false, 00:18:40.222 "abort": false, 00:18:40.222 "seek_hole": false, 00:18:40.222 "seek_data": false, 00:18:40.222 "copy": false, 00:18:40.222 "nvme_iov_md": false 00:18:40.222 }, 00:18:40.222 "memory_domains": [ 00:18:40.222 { 00:18:40.222 "dma_device_id": "system", 00:18:40.222 "dma_device_type": 1 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.222 "dma_device_type": 2 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "dma_device_id": "system", 00:18:40.222 "dma_device_type": 1 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.222 "dma_device_type": 2 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "dma_device_id": "system", 00:18:40.222 "dma_device_type": 1 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.222 "dma_device_type": 2 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "dma_device_id": "system", 00:18:40.222 "dma_device_type": 1 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:40.222 "dma_device_type": 2 00:18:40.222 } 00:18:40.222 ], 00:18:40.222 "driver_specific": { 00:18:40.222 "raid": { 00:18:40.222 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:40.222 "strip_size_kb": 0, 00:18:40.222 "state": "online", 00:18:40.222 "raid_level": "raid1", 00:18:40.222 "superblock": true, 00:18:40.222 "num_base_bdevs": 4, 00:18:40.222 "num_base_bdevs_discovered": 4, 00:18:40.222 "num_base_bdevs_operational": 4, 00:18:40.222 "base_bdevs_list": [ 00:18:40.222 { 00:18:40.222 "name": "pt1", 00:18:40.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.222 "is_configured": true, 00:18:40.222 "data_offset": 2048, 00:18:40.222 "data_size": 63488 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "name": "pt2", 00:18:40.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.222 "is_configured": true, 00:18:40.222 "data_offset": 2048, 00:18:40.222 "data_size": 63488 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "name": "pt3", 00:18:40.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.222 "is_configured": true, 00:18:40.222 "data_offset": 2048, 00:18:40.222 "data_size": 63488 00:18:40.222 }, 00:18:40.222 { 00:18:40.222 "name": "pt4", 00:18:40.222 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:40.222 "is_configured": true, 00:18:40.222 "data_offset": 2048, 00:18:40.222 "data_size": 63488 00:18:40.222 } 00:18:40.222 ] 00:18:40.222 } 00:18:40.222 } 00:18:40.222 }' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:40.222 pt2 00:18:40.222 pt3 00:18:40.222 pt4' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.222 13:12:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.222 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.223 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:40.480 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:40.480 [2024-12-06 13:12:46.844663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=26d92589-7288-4f91-8d67-e6ab2ed4b0bb 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 26d92589-7288-4f91-8d67-e6ab2ed4b0bb ']' 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 [2024-12-06 13:12:46.896218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.481 [2024-12-06 13:12:46.896268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.481 [2024-12-06 13:12:46.896430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.481 [2024-12-06 13:12:46.896604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.481 [2024-12-06 13:12:46.896639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.481 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.739 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.739 [2024-12-06 13:12:47.048254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:40.739 [2024-12-06 13:12:47.051143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:40.739 [2024-12-06 13:12:47.051213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:40.739 [2024-12-06 13:12:47.051271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:40.739 [2024-12-06 13:12:47.051392] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:40.739 [2024-12-06 13:12:47.051516] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:40.739 [2024-12-06 13:12:47.051552] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:40.739 [2024-12-06 13:12:47.051583] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:40.739 [2024-12-06 13:12:47.051604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.739 [2024-12-06 13:12:47.051621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:18:40.739 request: 00:18:40.739 { 00:18:40.739 "name": "raid_bdev1", 00:18:40.739 "raid_level": "raid1", 00:18:40.739 "base_bdevs": [ 00:18:40.739 "malloc1", 00:18:40.739 "malloc2", 00:18:40.739 "malloc3", 00:18:40.739 "malloc4" 00:18:40.739 ], 00:18:40.739 "superblock": false, 00:18:40.739 "method": "bdev_raid_create", 00:18:40.739 "req_id": 1 00:18:40.739 } 00:18:40.739 Got JSON-RPC error response 00:18:40.739 response: 00:18:40.739 { 00:18:40.739 "code": -17, 00:18:40.740 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:40.740 } 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.740 13:12:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.740 [2024-12-06 13:12:47.108427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.740 [2024-12-06 13:12:47.108519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.740 [2024-12-06 13:12:47.108543] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:40.740 [2024-12-06 13:12:47.108561] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.740 [2024-12-06 13:12:47.111704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.740 [2024-12-06 13:12:47.111752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.740 [2024-12-06 13:12:47.111839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:40.740 [2024-12-06 13:12:47.111918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.740 pt1 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.740 13:12:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.740 "name": "raid_bdev1", 00:18:40.740 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:40.740 "strip_size_kb": 0, 00:18:40.740 "state": "configuring", 00:18:40.740 "raid_level": "raid1", 00:18:40.740 "superblock": true, 00:18:40.740 "num_base_bdevs": 4, 00:18:40.740 "num_base_bdevs_discovered": 1, 00:18:40.740 "num_base_bdevs_operational": 4, 00:18:40.740 "base_bdevs_list": [ 00:18:40.740 { 00:18:40.740 "name": "pt1", 00:18:40.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.740 "is_configured": true, 00:18:40.740 "data_offset": 2048, 00:18:40.740 "data_size": 63488 00:18:40.740 }, 00:18:40.740 { 00:18:40.740 "name": null, 00:18:40.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.740 "is_configured": false, 00:18:40.740 "data_offset": 2048, 00:18:40.740 "data_size": 63488 00:18:40.740 }, 00:18:40.740 { 00:18:40.740 "name": null, 00:18:40.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.740 
"is_configured": false, 00:18:40.740 "data_offset": 2048, 00:18:40.740 "data_size": 63488 00:18:40.740 }, 00:18:40.740 { 00:18:40.740 "name": null, 00:18:40.740 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:40.740 "is_configured": false, 00:18:40.740 "data_offset": 2048, 00:18:40.740 "data_size": 63488 00:18:40.740 } 00:18:40.740 ] 00:18:40.740 }' 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.740 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.307 [2024-12-06 13:12:47.636732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.307 [2024-12-06 13:12:47.636855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.307 [2024-12-06 13:12:47.636891] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:41.307 [2024-12-06 13:12:47.636911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.307 [2024-12-06 13:12:47.637550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.307 [2024-12-06 13:12:47.637595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.307 [2024-12-06 13:12:47.637706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:41.307 [2024-12-06 13:12:47.637748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:18:41.307 pt2 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.307 [2024-12-06 13:12:47.644699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.307 13:12:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.307 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.307 "name": "raid_bdev1", 00:18:41.307 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:41.307 "strip_size_kb": 0, 00:18:41.307 "state": "configuring", 00:18:41.307 "raid_level": "raid1", 00:18:41.307 "superblock": true, 00:18:41.307 "num_base_bdevs": 4, 00:18:41.307 "num_base_bdevs_discovered": 1, 00:18:41.307 "num_base_bdevs_operational": 4, 00:18:41.307 "base_bdevs_list": [ 00:18:41.307 { 00:18:41.307 "name": "pt1", 00:18:41.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.307 "is_configured": true, 00:18:41.307 "data_offset": 2048, 00:18:41.307 "data_size": 63488 00:18:41.307 }, 00:18:41.307 { 00:18:41.307 "name": null, 00:18:41.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.307 "is_configured": false, 00:18:41.308 "data_offset": 0, 00:18:41.308 "data_size": 63488 00:18:41.308 }, 00:18:41.308 { 00:18:41.308 "name": null, 00:18:41.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.308 "is_configured": false, 00:18:41.308 "data_offset": 2048, 00:18:41.308 "data_size": 63488 00:18:41.308 }, 00:18:41.308 { 00:18:41.308 "name": null, 00:18:41.308 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:41.308 "is_configured": false, 00:18:41.308 "data_offset": 2048, 00:18:41.308 "data_size": 63488 00:18:41.308 } 00:18:41.308 ] 00:18:41.308 }' 00:18:41.308 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.308 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.876 [2024-12-06 13:12:48.160906] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.876 [2024-12-06 13:12:48.161035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.876 [2024-12-06 13:12:48.161069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:41.876 [2024-12-06 13:12:48.161084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.876 [2024-12-06 13:12:48.161770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.876 [2024-12-06 13:12:48.161821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.876 [2024-12-06 13:12:48.161938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:41.876 [2024-12-06 13:12:48.161972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.876 pt2 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:41.876 13:12:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.876 [2024-12-06 13:12:48.168858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:41.876 [2024-12-06 13:12:48.168924] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.876 [2024-12-06 13:12:48.168962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:41.876 [2024-12-06 13:12:48.168978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.876 [2024-12-06 13:12:48.169531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.876 [2024-12-06 13:12:48.169567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:41.876 [2024-12-06 13:12:48.169674] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:41.876 [2024-12-06 13:12:48.169711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:41.876 pt3 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.876 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.876 [2024-12-06 13:12:48.176774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:41.876 [2024-12-06 
13:12:48.176821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.876 [2024-12-06 13:12:48.176846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:41.877 [2024-12-06 13:12:48.176860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.877 [2024-12-06 13:12:48.177372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.877 [2024-12-06 13:12:48.177422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:41.877 [2024-12-06 13:12:48.177518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:41.877 [2024-12-06 13:12:48.177553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:41.877 [2024-12-06 13:12:48.177728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:41.877 [2024-12-06 13:12:48.177743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:41.877 [2024-12-06 13:12:48.178083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:41.877 [2024-12-06 13:12:48.178325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:41.877 [2024-12-06 13:12:48.178355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:41.877 [2024-12-06 13:12:48.178578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.877 pt4 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.877 "name": "raid_bdev1", 00:18:41.877 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:41.877 "strip_size_kb": 0, 00:18:41.877 "state": "online", 00:18:41.877 "raid_level": "raid1", 00:18:41.877 "superblock": true, 00:18:41.877 "num_base_bdevs": 4, 00:18:41.877 
"num_base_bdevs_discovered": 4, 00:18:41.877 "num_base_bdevs_operational": 4, 00:18:41.877 "base_bdevs_list": [ 00:18:41.877 { 00:18:41.877 "name": "pt1", 00:18:41.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.877 "is_configured": true, 00:18:41.877 "data_offset": 2048, 00:18:41.877 "data_size": 63488 00:18:41.877 }, 00:18:41.877 { 00:18:41.877 "name": "pt2", 00:18:41.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.877 "is_configured": true, 00:18:41.877 "data_offset": 2048, 00:18:41.877 "data_size": 63488 00:18:41.877 }, 00:18:41.877 { 00:18:41.877 "name": "pt3", 00:18:41.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.877 "is_configured": true, 00:18:41.877 "data_offset": 2048, 00:18:41.877 "data_size": 63488 00:18:41.877 }, 00:18:41.877 { 00:18:41.877 "name": "pt4", 00:18:41.877 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:41.877 "is_configured": true, 00:18:41.877 "data_offset": 2048, 00:18:41.877 "data_size": 63488 00:18:41.877 } 00:18:41.877 ] 00:18:41.877 }' 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.877 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.446 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.447 [2024-12-06 13:12:48.713572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:42.447 "name": "raid_bdev1", 00:18:42.447 "aliases": [ 00:18:42.447 "26d92589-7288-4f91-8d67-e6ab2ed4b0bb" 00:18:42.447 ], 00:18:42.447 "product_name": "Raid Volume", 00:18:42.447 "block_size": 512, 00:18:42.447 "num_blocks": 63488, 00:18:42.447 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:42.447 "assigned_rate_limits": { 00:18:42.447 "rw_ios_per_sec": 0, 00:18:42.447 "rw_mbytes_per_sec": 0, 00:18:42.447 "r_mbytes_per_sec": 0, 00:18:42.447 "w_mbytes_per_sec": 0 00:18:42.447 }, 00:18:42.447 "claimed": false, 00:18:42.447 "zoned": false, 00:18:42.447 "supported_io_types": { 00:18:42.447 "read": true, 00:18:42.447 "write": true, 00:18:42.447 "unmap": false, 00:18:42.447 "flush": false, 00:18:42.447 "reset": true, 00:18:42.447 "nvme_admin": false, 00:18:42.447 "nvme_io": false, 00:18:42.447 "nvme_io_md": false, 00:18:42.447 "write_zeroes": true, 00:18:42.447 "zcopy": false, 00:18:42.447 "get_zone_info": false, 00:18:42.447 "zone_management": false, 00:18:42.447 "zone_append": false, 00:18:42.447 "compare": false, 00:18:42.447 "compare_and_write": false, 00:18:42.447 "abort": false, 00:18:42.447 "seek_hole": false, 00:18:42.447 "seek_data": false, 00:18:42.447 "copy": false, 00:18:42.447 "nvme_iov_md": false 00:18:42.447 }, 00:18:42.447 "memory_domains": [ 00:18:42.447 { 00:18:42.447 "dma_device_id": "system", 00:18:42.447 
"dma_device_type": 1 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.447 "dma_device_type": 2 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "dma_device_id": "system", 00:18:42.447 "dma_device_type": 1 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.447 "dma_device_type": 2 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "dma_device_id": "system", 00:18:42.447 "dma_device_type": 1 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.447 "dma_device_type": 2 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "dma_device_id": "system", 00:18:42.447 "dma_device_type": 1 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.447 "dma_device_type": 2 00:18:42.447 } 00:18:42.447 ], 00:18:42.447 "driver_specific": { 00:18:42.447 "raid": { 00:18:42.447 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:42.447 "strip_size_kb": 0, 00:18:42.447 "state": "online", 00:18:42.447 "raid_level": "raid1", 00:18:42.447 "superblock": true, 00:18:42.447 "num_base_bdevs": 4, 00:18:42.447 "num_base_bdevs_discovered": 4, 00:18:42.447 "num_base_bdevs_operational": 4, 00:18:42.447 "base_bdevs_list": [ 00:18:42.447 { 00:18:42.447 "name": "pt1", 00:18:42.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.447 "is_configured": true, 00:18:42.447 "data_offset": 2048, 00:18:42.447 "data_size": 63488 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "name": "pt2", 00:18:42.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.447 "is_configured": true, 00:18:42.447 "data_offset": 2048, 00:18:42.447 "data_size": 63488 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "name": "pt3", 00:18:42.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.447 "is_configured": true, 00:18:42.447 "data_offset": 2048, 00:18:42.447 "data_size": 63488 00:18:42.447 }, 00:18:42.447 { 00:18:42.447 "name": "pt4", 00:18:42.447 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:18:42.447 "is_configured": true, 00:18:42.447 "data_offset": 2048, 00:18:42.447 "data_size": 63488 00:18:42.447 } 00:18:42.447 ] 00:18:42.447 } 00:18:42.447 } 00:18:42.447 }' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:42.447 pt2 00:18:42.447 pt3 00:18:42.447 pt4' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.447 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.706 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.706 [2024-12-06 13:12:49.085541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 26d92589-7288-4f91-8d67-e6ab2ed4b0bb '!=' 26d92589-7288-4f91-8d67-e6ab2ed4b0bb ']' 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.706 [2024-12-06 13:12:49.133203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:42.706 13:12:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.706 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.706 "name": "raid_bdev1", 00:18:42.706 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:42.706 "strip_size_kb": 0, 00:18:42.706 "state": "online", 
00:18:42.706 "raid_level": "raid1", 00:18:42.706 "superblock": true, 00:18:42.706 "num_base_bdevs": 4, 00:18:42.706 "num_base_bdevs_discovered": 3, 00:18:42.707 "num_base_bdevs_operational": 3, 00:18:42.707 "base_bdevs_list": [ 00:18:42.707 { 00:18:42.707 "name": null, 00:18:42.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.707 "is_configured": false, 00:18:42.707 "data_offset": 0, 00:18:42.707 "data_size": 63488 00:18:42.707 }, 00:18:42.707 { 00:18:42.707 "name": "pt2", 00:18:42.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.707 "is_configured": true, 00:18:42.707 "data_offset": 2048, 00:18:42.707 "data_size": 63488 00:18:42.707 }, 00:18:42.707 { 00:18:42.707 "name": "pt3", 00:18:42.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.707 "is_configured": true, 00:18:42.707 "data_offset": 2048, 00:18:42.707 "data_size": 63488 00:18:42.707 }, 00:18:42.707 { 00:18:42.707 "name": "pt4", 00:18:42.707 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:42.707 "is_configured": true, 00:18:42.707 "data_offset": 2048, 00:18:42.707 "data_size": 63488 00:18:42.707 } 00:18:42.707 ] 00:18:42.707 }' 00:18:42.707 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.707 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.275 [2024-12-06 13:12:49.657316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.275 [2024-12-06 13:12:49.657379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.275 [2024-12-06 13:12:49.657506] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:18:43.275 [2024-12-06 13:12:49.657617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.275 [2024-12-06 13:12:49.657648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:43.275 
13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.275 [2024-12-06 13:12:49.737300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.275 [2024-12-06 13:12:49.737389] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.275 [2024-12-06 13:12:49.737421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:43.275 [2024-12-06 13:12:49.737437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.275 [2024-12-06 13:12:49.740788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.275 [2024-12-06 13:12:49.740835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.275 [2024-12-06 13:12:49.740984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:43.275 [2024-12-06 13:12:49.741049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.275 pt2 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.275 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.276 "name": "raid_bdev1", 00:18:43.276 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:43.276 "strip_size_kb": 0, 00:18:43.276 "state": "configuring", 00:18:43.276 "raid_level": "raid1", 00:18:43.276 "superblock": true, 00:18:43.276 "num_base_bdevs": 4, 00:18:43.276 "num_base_bdevs_discovered": 1, 00:18:43.276 "num_base_bdevs_operational": 3, 00:18:43.276 "base_bdevs_list": [ 00:18:43.276 { 00:18:43.276 "name": null, 00:18:43.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.276 "is_configured": false, 00:18:43.276 "data_offset": 2048, 00:18:43.276 "data_size": 63488 00:18:43.276 }, 00:18:43.276 { 00:18:43.276 "name": "pt2", 00:18:43.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.276 "is_configured": true, 00:18:43.276 "data_offset": 2048, 00:18:43.276 "data_size": 63488 00:18:43.276 }, 00:18:43.276 { 00:18:43.276 "name": null, 00:18:43.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.276 "is_configured": false, 00:18:43.276 "data_offset": 2048, 00:18:43.276 "data_size": 63488 00:18:43.276 }, 00:18:43.276 { 00:18:43.276 "name": null, 00:18:43.276 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:43.276 "is_configured": false, 00:18:43.276 "data_offset": 2048, 00:18:43.276 "data_size": 63488 00:18:43.276 } 00:18:43.276 ] 00:18:43.276 }' 
00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.276 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.843 [2024-12-06 13:12:50.229520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:43.843 [2024-12-06 13:12:50.229622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.843 [2024-12-06 13:12:50.229659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:43.843 [2024-12-06 13:12:50.229675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.843 [2024-12-06 13:12:50.230342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.843 [2024-12-06 13:12:50.230374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:43.843 [2024-12-06 13:12:50.230514] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:43.843 [2024-12-06 13:12:50.230550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:43.843 pt3 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.843 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.843 "name": "raid_bdev1", 00:18:43.843 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:43.843 "strip_size_kb": 0, 00:18:43.843 "state": "configuring", 00:18:43.843 "raid_level": "raid1", 00:18:43.843 "superblock": true, 00:18:43.843 "num_base_bdevs": 4, 00:18:43.843 "num_base_bdevs_discovered": 2, 00:18:43.843 "num_base_bdevs_operational": 3, 00:18:43.843 
"base_bdevs_list": [ 00:18:43.843 { 00:18:43.844 "name": null, 00:18:43.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.844 "is_configured": false, 00:18:43.844 "data_offset": 2048, 00:18:43.844 "data_size": 63488 00:18:43.844 }, 00:18:43.844 { 00:18:43.844 "name": "pt2", 00:18:43.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.844 "is_configured": true, 00:18:43.844 "data_offset": 2048, 00:18:43.844 "data_size": 63488 00:18:43.844 }, 00:18:43.844 { 00:18:43.844 "name": "pt3", 00:18:43.844 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.844 "is_configured": true, 00:18:43.844 "data_offset": 2048, 00:18:43.844 "data_size": 63488 00:18:43.844 }, 00:18:43.844 { 00:18:43.844 "name": null, 00:18:43.844 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:43.844 "is_configured": false, 00:18:43.844 "data_offset": 2048, 00:18:43.844 "data_size": 63488 00:18:43.844 } 00:18:43.844 ] 00:18:43.844 }' 00:18:43.844 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.844 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.411 [2024-12-06 13:12:50.757747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:44.411 [2024-12-06 13:12:50.757862] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.411 [2024-12-06 13:12:50.757902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:44.411 [2024-12-06 13:12:50.757918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.411 [2024-12-06 13:12:50.758599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.411 [2024-12-06 13:12:50.758632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:44.411 [2024-12-06 13:12:50.758756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:44.411 [2024-12-06 13:12:50.758806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:44.411 [2024-12-06 13:12:50.758977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:44.411 [2024-12-06 13:12:50.759000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:44.411 [2024-12-06 13:12:50.759337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:44.411 [2024-12-06 13:12:50.759586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:44.411 [2024-12-06 13:12:50.759615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:44.411 [2024-12-06 13:12:50.759820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.411 pt4 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.411 "name": "raid_bdev1", 00:18:44.411 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:44.411 "strip_size_kb": 0, 00:18:44.411 "state": "online", 00:18:44.411 "raid_level": "raid1", 00:18:44.411 "superblock": true, 00:18:44.411 "num_base_bdevs": 4, 00:18:44.411 "num_base_bdevs_discovered": 3, 00:18:44.411 "num_base_bdevs_operational": 3, 00:18:44.411 "base_bdevs_list": [ 00:18:44.411 { 00:18:44.411 "name": null, 00:18:44.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.411 "is_configured": false, 00:18:44.411 
"data_offset": 2048, 00:18:44.411 "data_size": 63488 00:18:44.411 }, 00:18:44.411 { 00:18:44.411 "name": "pt2", 00:18:44.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.411 "is_configured": true, 00:18:44.411 "data_offset": 2048, 00:18:44.411 "data_size": 63488 00:18:44.411 }, 00:18:44.411 { 00:18:44.411 "name": "pt3", 00:18:44.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:44.411 "is_configured": true, 00:18:44.411 "data_offset": 2048, 00:18:44.411 "data_size": 63488 00:18:44.411 }, 00:18:44.411 { 00:18:44.411 "name": "pt4", 00:18:44.411 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:44.411 "is_configured": true, 00:18:44.411 "data_offset": 2048, 00:18:44.411 "data_size": 63488 00:18:44.411 } 00:18:44.411 ] 00:18:44.411 }' 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.411 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 [2024-12-06 13:12:51.261797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.978 [2024-12-06 13:12:51.261865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.978 [2024-12-06 13:12:51.261976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.978 [2024-12-06 13:12:51.262102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.978 [2024-12-06 13:12:51.262129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:44.978 13:12:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.978 [2024-12-06 13:12:51.333804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:44.978 [2024-12-06 13:12:51.333904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:44.978 [2024-12-06 13:12:51.333947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:44.978 [2024-12-06 13:12:51.333966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.978 [2024-12-06 13:12:51.337003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.978 [2024-12-06 13:12:51.337066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:44.978 [2024-12-06 13:12:51.337175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:44.978 [2024-12-06 13:12:51.337246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:44.978 [2024-12-06 13:12:51.337441] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:44.978 [2024-12-06 13:12:51.337489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.978 [2024-12-06 13:12:51.337513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:44.978 [2024-12-06 13:12:51.337589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.978 [2024-12-06 13:12:51.337749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:44.978 pt1 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:44.978 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.979 "name": "raid_bdev1", 00:18:44.979 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:44.979 "strip_size_kb": 0, 00:18:44.979 "state": "configuring", 00:18:44.979 "raid_level": "raid1", 00:18:44.979 "superblock": true, 00:18:44.979 "num_base_bdevs": 4, 00:18:44.979 "num_base_bdevs_discovered": 2, 00:18:44.979 "num_base_bdevs_operational": 3, 00:18:44.979 "base_bdevs_list": [ 00:18:44.979 { 00:18:44.979 "name": null, 00:18:44.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.979 "is_configured": false, 00:18:44.979 "data_offset": 2048, 00:18:44.979 
"data_size": 63488 00:18:44.979 }, 00:18:44.979 { 00:18:44.979 "name": "pt2", 00:18:44.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.979 "is_configured": true, 00:18:44.979 "data_offset": 2048, 00:18:44.979 "data_size": 63488 00:18:44.979 }, 00:18:44.979 { 00:18:44.979 "name": "pt3", 00:18:44.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:44.979 "is_configured": true, 00:18:44.979 "data_offset": 2048, 00:18:44.979 "data_size": 63488 00:18:44.979 }, 00:18:44.979 { 00:18:44.979 "name": null, 00:18:44.979 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:44.979 "is_configured": false, 00:18:44.979 "data_offset": 2048, 00:18:44.979 "data_size": 63488 00:18:44.979 } 00:18:44.979 ] 00:18:44.979 }' 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.979 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.546 [2024-12-06 
13:12:51.902115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:45.546 [2024-12-06 13:12:51.902235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.546 [2024-12-06 13:12:51.902276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:45.546 [2024-12-06 13:12:51.902293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.546 [2024-12-06 13:12:51.902955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.546 [2024-12-06 13:12:51.902989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:45.546 [2024-12-06 13:12:51.903114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:45.546 [2024-12-06 13:12:51.903149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:45.546 [2024-12-06 13:12:51.903353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:45.546 [2024-12-06 13:12:51.903377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:45.546 [2024-12-06 13:12:51.903795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:45.546 [2024-12-06 13:12:51.904047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:45.546 [2024-12-06 13:12:51.904076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:45.546 [2024-12-06 13:12:51.904277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.546 pt4 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:45.546 13:12:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.546 "name": "raid_bdev1", 00:18:45.546 "uuid": "26d92589-7288-4f91-8d67-e6ab2ed4b0bb", 00:18:45.546 "strip_size_kb": 0, 00:18:45.546 "state": "online", 00:18:45.546 "raid_level": "raid1", 00:18:45.546 "superblock": true, 00:18:45.546 "num_base_bdevs": 4, 00:18:45.546 "num_base_bdevs_discovered": 3, 00:18:45.546 "num_base_bdevs_operational": 3, 00:18:45.546 "base_bdevs_list": [ 00:18:45.546 { 
00:18:45.546 "name": null, 00:18:45.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.546 "is_configured": false, 00:18:45.546 "data_offset": 2048, 00:18:45.546 "data_size": 63488 00:18:45.546 }, 00:18:45.546 { 00:18:45.546 "name": "pt2", 00:18:45.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.546 "is_configured": true, 00:18:45.546 "data_offset": 2048, 00:18:45.546 "data_size": 63488 00:18:45.546 }, 00:18:45.546 { 00:18:45.546 "name": "pt3", 00:18:45.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:45.546 "is_configured": true, 00:18:45.546 "data_offset": 2048, 00:18:45.546 "data_size": 63488 00:18:45.546 }, 00:18:45.546 { 00:18:45.546 "name": "pt4", 00:18:45.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:45.546 "is_configured": true, 00:18:45.546 "data_offset": 2048, 00:18:45.546 "data_size": 63488 00:18:45.546 } 00:18:45.546 ] 00:18:45.546 }' 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.546 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.117 
13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:46.117 [2024-12-06 13:12:52.494703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 26d92589-7288-4f91-8d67-e6ab2ed4b0bb '!=' 26d92589-7288-4f91-8d67-e6ab2ed4b0bb ']' 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74978 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74978 ']' 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74978 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74978 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.117 killing process with pid 74978 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74978' 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74978 00:18:46.117 [2024-12-06 13:12:52.579961] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.117 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74978 00:18:46.117 [2024-12-06 13:12:52.580137] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.117 [2024-12-06 13:12:52.580255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.117 [2024-12-06 13:12:52.580285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:46.684 [2024-12-06 13:12:52.933724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:47.620 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:47.620 00:18:47.620 real 0m9.469s 00:18:47.620 user 0m15.469s 00:18:47.620 sys 0m1.442s 00:18:47.620 13:12:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.620 ************************************ 00:18:47.620 END TEST raid_superblock_test 00:18:47.620 ************************************ 00:18:47.620 13:12:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.620 13:12:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:18:47.620 13:12:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:47.620 13:12:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.620 13:12:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.620 ************************************ 00:18:47.620 START TEST raid_read_error_test 00:18:47.620 ************************************ 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:18:47.620 13:12:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8XOkf9Ybl3 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75473 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75473 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75473 ']' 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.620 13:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.877 [2024-12-06 13:12:54.258382] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:18:47.877 [2024-12-06 13:12:54.258723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75473 ] 00:18:48.137 [2024-12-06 13:12:54.446577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.137 [2024-12-06 13:12:54.595135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.395 [2024-12-06 13:12:54.817597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.395 [2024-12-06 13:12:54.817671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 BaseBdev1_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 true 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 [2024-12-06 13:12:55.313239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:48.964 [2024-12-06 13:12:55.313316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.964 [2024-12-06 13:12:55.313349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:48.964 [2024-12-06 13:12:55.313383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.964 [2024-12-06 13:12:55.316591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.964 [2024-12-06 13:12:55.316642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:48.964 BaseBdev1 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 BaseBdev2_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 true 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 [2024-12-06 13:12:55.378545] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:48.964 [2024-12-06 13:12:55.378661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.964 [2024-12-06 13:12:55.378690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:48.964 [2024-12-06 13:12:55.378708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.964 [2024-12-06 13:12:55.381927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.964 [2024-12-06 13:12:55.381976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:48.964 BaseBdev2 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 BaseBdev3_malloc 00:18:48.964 13:12:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 true 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.964 [2024-12-06 13:12:55.450707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:48.964 [2024-12-06 13:12:55.450844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.964 [2024-12-06 13:12:55.450893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:48.964 [2024-12-06 13:12:55.450913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.964 [2024-12-06 13:12:55.454338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.964 [2024-12-06 13:12:55.454394] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:48.964 BaseBdev3 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.964 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.222 BaseBdev4_malloc 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.222 true 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.222 [2024-12-06 13:12:55.511601] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:49.222 [2024-12-06 13:12:55.511678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.222 [2024-12-06 13:12:55.511716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:49.222 [2024-12-06 13:12:55.511735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.222 [2024-12-06 13:12:55.514957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.222 [2024-12-06 13:12:55.515010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:49.222 BaseBdev4 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.222 [2024-12-06 13:12:55.519728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.222 [2024-12-06 13:12:55.522362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.222 [2024-12-06 13:12:55.522509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:49.222 [2024-12-06 13:12:55.522617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:49.222 [2024-12-06 13:12:55.522952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:49.222 [2024-12-06 13:12:55.522983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:49.222 [2024-12-06 13:12:55.523324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:49.222 [2024-12-06 13:12:55.523608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:49.222 [2024-12-06 13:12:55.523633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:49.222 [2024-12-06 13:12:55.523904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:49.222 13:12:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.222 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.222 "name": "raid_bdev1", 00:18:49.222 "uuid": "89f61450-9b2f-4272-9aad-256c9ae98534", 00:18:49.222 "strip_size_kb": 0, 00:18:49.222 "state": "online", 00:18:49.222 "raid_level": "raid1", 00:18:49.222 "superblock": true, 00:18:49.222 "num_base_bdevs": 4, 00:18:49.222 "num_base_bdevs_discovered": 4, 00:18:49.222 "num_base_bdevs_operational": 4, 00:18:49.222 "base_bdevs_list": [ 00:18:49.222 { 
00:18:49.222 "name": "BaseBdev1", 00:18:49.222 "uuid": "1f2c98bd-f448-5759-9d3e-ed60b2f42f43", 00:18:49.222 "is_configured": true, 00:18:49.222 "data_offset": 2048, 00:18:49.222 "data_size": 63488 00:18:49.222 }, 00:18:49.222 { 00:18:49.222 "name": "BaseBdev2", 00:18:49.222 "uuid": "af57680d-9b39-567f-9084-95a98d5a2e26", 00:18:49.222 "is_configured": true, 00:18:49.222 "data_offset": 2048, 00:18:49.222 "data_size": 63488 00:18:49.222 }, 00:18:49.222 { 00:18:49.222 "name": "BaseBdev3", 00:18:49.222 "uuid": "3c357853-719a-57a4-962b-00ef822dbc4b", 00:18:49.222 "is_configured": true, 00:18:49.222 "data_offset": 2048, 00:18:49.222 "data_size": 63488 00:18:49.222 }, 00:18:49.222 { 00:18:49.222 "name": "BaseBdev4", 00:18:49.223 "uuid": "a3c1568e-c98c-53af-a94e-e01c72a5a65a", 00:18:49.223 "is_configured": true, 00:18:49.223 "data_offset": 2048, 00:18:49.223 "data_size": 63488 00:18:49.223 } 00:18:49.223 ] 00:18:49.223 }' 00:18:49.223 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.223 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.791 13:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:49.791 13:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:49.791 [2024-12-06 13:12:56.157575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:50.727 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:50.727 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.727 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.727 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.727 13:12:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:50.727 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:50.727 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.728 13:12:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.728 "name": "raid_bdev1", 00:18:50.728 "uuid": "89f61450-9b2f-4272-9aad-256c9ae98534", 00:18:50.728 "strip_size_kb": 0, 00:18:50.728 "state": "online", 00:18:50.728 "raid_level": "raid1", 00:18:50.728 "superblock": true, 00:18:50.728 "num_base_bdevs": 4, 00:18:50.728 "num_base_bdevs_discovered": 4, 00:18:50.728 "num_base_bdevs_operational": 4, 00:18:50.728 "base_bdevs_list": [ 00:18:50.728 { 00:18:50.728 "name": "BaseBdev1", 00:18:50.728 "uuid": "1f2c98bd-f448-5759-9d3e-ed60b2f42f43", 00:18:50.728 "is_configured": true, 00:18:50.728 "data_offset": 2048, 00:18:50.728 "data_size": 63488 00:18:50.728 }, 00:18:50.728 { 00:18:50.728 "name": "BaseBdev2", 00:18:50.728 "uuid": "af57680d-9b39-567f-9084-95a98d5a2e26", 00:18:50.728 "is_configured": true, 00:18:50.728 "data_offset": 2048, 00:18:50.728 "data_size": 63488 00:18:50.728 }, 00:18:50.728 { 00:18:50.728 "name": "BaseBdev3", 00:18:50.728 "uuid": "3c357853-719a-57a4-962b-00ef822dbc4b", 00:18:50.728 "is_configured": true, 00:18:50.728 "data_offset": 2048, 00:18:50.728 "data_size": 63488 00:18:50.728 }, 00:18:50.728 { 00:18:50.728 "name": "BaseBdev4", 00:18:50.728 "uuid": "a3c1568e-c98c-53af-a94e-e01c72a5a65a", 00:18:50.728 "is_configured": true, 00:18:50.728 "data_offset": 2048, 00:18:50.728 "data_size": 63488 00:18:50.728 } 00:18:50.728 ] 00:18:50.728 }' 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.728 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.296 [2024-12-06 13:12:57.558828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.296 [2024-12-06 13:12:57.558870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.296 [2024-12-06 13:12:57.562243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.296 [2024-12-06 13:12:57.562333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.296 [2024-12-06 13:12:57.562558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.296 [2024-12-06 13:12:57.562588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:51.296 { 00:18:51.296 "results": [ 00:18:51.296 { 00:18:51.296 "job": "raid_bdev1", 00:18:51.296 "core_mask": "0x1", 00:18:51.296 "workload": "randrw", 00:18:51.296 "percentage": 50, 00:18:51.296 "status": "finished", 00:18:51.296 "queue_depth": 1, 00:18:51.296 "io_size": 131072, 00:18:51.296 "runtime": 1.398807, 00:18:51.296 "iops": 6514.837286344721, 00:18:51.296 "mibps": 814.3546607930901, 00:18:51.296 "io_failed": 0, 00:18:51.296 "io_timeout": 0, 00:18:51.296 "avg_latency_us": 149.30783795377232, 00:18:51.296 "min_latency_us": 40.72727272727273, 00:18:51.296 "max_latency_us": 2010.7636363636364 00:18:51.296 } 00:18:51.296 ], 00:18:51.296 "core_count": 1 00:18:51.296 } 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75473 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75473 ']' 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75473 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75473 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.296 killing process with pid 75473 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75473' 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75473 00:18:51.296 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75473 00:18:51.296 [2024-12-06 13:12:57.600221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.555 [2024-12-06 13:12:57.906013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8XOkf9Ybl3 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:52.930 00:18:52.930 real 0m5.008s 00:18:52.930 user 0m6.047s 00:18:52.930 sys 0m0.724s 
00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.930 13:12:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.930 ************************************ 00:18:52.930 END TEST raid_read_error_test 00:18:52.930 ************************************ 00:18:52.930 13:12:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:18:52.930 13:12:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:52.930 13:12:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.930 13:12:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.930 ************************************ 00:18:52.930 START TEST raid_write_error_test 00:18:52.930 ************************************ 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.es2XwWBRYk 00:18:52.930 13:12:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75619 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75619 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75619 ']' 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:52.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.930 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.930 [2024-12-06 13:12:59.300868] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:18:52.930 [2024-12-06 13:12:59.301051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75619 ] 00:18:53.189 [2024-12-06 13:12:59.479522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.189 [2024-12-06 13:12:59.619507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.447 [2024-12-06 13:12:59.832964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.447 [2024-12-06 13:12:59.833035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.705 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.705 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:18:53.705 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:53.705 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:53.705 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.705 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 BaseBdev1_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 true 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 [2024-12-06 13:13:00.276066] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:53.964 [2024-12-06 13:13:00.276392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.964 [2024-12-06 13:13:00.276438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:53.964 [2024-12-06 13:13:00.276482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.964 [2024-12-06 13:13:00.279580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.964 [2024-12-06 13:13:00.279633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:53.964 BaseBdev1 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 BaseBdev2_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:53.964 13:13:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 true 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 [2024-12-06 13:13:00.345099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:53.964 [2024-12-06 13:13:00.345351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.964 [2024-12-06 13:13:00.345392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:53.964 [2024-12-06 13:13:00.345412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.964 [2024-12-06 13:13:00.348585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.964 [2024-12-06 13:13:00.348636] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:53.964 BaseBdev2 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:53.964 BaseBdev3_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 true 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 [2024-12-06 13:13:00.425964] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:53.964 [2024-12-06 13:13:00.426228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.964 [2024-12-06 13:13:00.426306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:53.964 [2024-12-06 13:13:00.426534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.964 [2024-12-06 13:13:00.429608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.964 [2024-12-06 13:13:00.429657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:53.964 BaseBdev3 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 BaseBdev4_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.964 true 00:18:53.964 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.223 [2024-12-06 13:13:00.492942] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:54.223 [2024-12-06 13:13:00.493256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.223 [2024-12-06 13:13:00.493301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:54.223 [2024-12-06 13:13:00.493323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.223 [2024-12-06 13:13:00.496766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.223 [2024-12-06 13:13:00.496936] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:54.223 BaseBdev4 
00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.223 [2024-12-06 13:13:00.501343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.223 [2024-12-06 13:13:00.504165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.223 [2024-12-06 13:13:00.504409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:54.223 [2024-12-06 13:13:00.504653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:54.223 [2024-12-06 13:13:00.505109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:54.223 [2024-12-06 13:13:00.505252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:54.223 [2024-12-06 13:13:00.505694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:54.223 [2024-12-06 13:13:00.506078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:54.223 [2024-12-06 13:13:00.506228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:54.223 [2024-12-06 13:13:00.506685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.223 "name": "raid_bdev1", 00:18:54.223 "uuid": "6f9627ee-87da-4b0f-942c-0619718f1d8c", 00:18:54.223 "strip_size_kb": 0, 00:18:54.223 "state": "online", 00:18:54.223 "raid_level": "raid1", 00:18:54.223 "superblock": true, 00:18:54.223 "num_base_bdevs": 4, 00:18:54.223 "num_base_bdevs_discovered": 4, 00:18:54.223 
"num_base_bdevs_operational": 4, 00:18:54.223 "base_bdevs_list": [ 00:18:54.223 { 00:18:54.223 "name": "BaseBdev1", 00:18:54.223 "uuid": "35a440d5-9faa-5f2f-a551-0be9edb7ad74", 00:18:54.223 "is_configured": true, 00:18:54.223 "data_offset": 2048, 00:18:54.223 "data_size": 63488 00:18:54.223 }, 00:18:54.223 { 00:18:54.223 "name": "BaseBdev2", 00:18:54.223 "uuid": "7e09bfca-4b8e-5503-b031-ad9008e3c85c", 00:18:54.223 "is_configured": true, 00:18:54.223 "data_offset": 2048, 00:18:54.223 "data_size": 63488 00:18:54.223 }, 00:18:54.223 { 00:18:54.223 "name": "BaseBdev3", 00:18:54.223 "uuid": "788bc0de-e6a6-56ba-a669-24cb26e1af0f", 00:18:54.223 "is_configured": true, 00:18:54.223 "data_offset": 2048, 00:18:54.223 "data_size": 63488 00:18:54.223 }, 00:18:54.223 { 00:18:54.223 "name": "BaseBdev4", 00:18:54.223 "uuid": "d545dcf9-9514-5e2f-992c-46a1bd335f9b", 00:18:54.223 "is_configured": true, 00:18:54.223 "data_offset": 2048, 00:18:54.223 "data_size": 63488 00:18:54.223 } 00:18:54.223 ] 00:18:54.223 }' 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.223 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.788 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:54.788 13:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:54.788 [2024-12-06 13:13:01.176579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.800 [2024-12-06 13:13:02.038348] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:55.800 [2024-12-06 13:13:02.038700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:55.800 [2024-12-06 13:13:02.039075] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.800 "name": "raid_bdev1", 00:18:55.800 "uuid": "6f9627ee-87da-4b0f-942c-0619718f1d8c", 00:18:55.800 "strip_size_kb": 0, 00:18:55.800 "state": "online", 00:18:55.800 "raid_level": "raid1", 00:18:55.800 "superblock": true, 00:18:55.800 "num_base_bdevs": 4, 00:18:55.800 "num_base_bdevs_discovered": 3, 00:18:55.800 "num_base_bdevs_operational": 3, 00:18:55.800 "base_bdevs_list": [ 00:18:55.800 { 00:18:55.800 "name": null, 00:18:55.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.800 "is_configured": false, 00:18:55.800 "data_offset": 0, 00:18:55.800 "data_size": 63488 00:18:55.800 }, 00:18:55.800 { 00:18:55.800 "name": "BaseBdev2", 00:18:55.800 "uuid": "7e09bfca-4b8e-5503-b031-ad9008e3c85c", 00:18:55.800 "is_configured": true, 00:18:55.800 "data_offset": 2048, 00:18:55.800 "data_size": 63488 00:18:55.800 }, 00:18:55.800 { 00:18:55.800 "name": "BaseBdev3", 00:18:55.800 "uuid": "788bc0de-e6a6-56ba-a669-24cb26e1af0f", 00:18:55.800 "is_configured": true, 00:18:55.800 "data_offset": 2048, 00:18:55.800 "data_size": 63488 00:18:55.800 }, 00:18:55.800 { 00:18:55.800 "name": "BaseBdev4", 00:18:55.800 "uuid": "d545dcf9-9514-5e2f-992c-46a1bd335f9b", 00:18:55.800 "is_configured": true, 00:18:55.800 "data_offset": 2048, 00:18:55.800 "data_size": 63488 00:18:55.800 } 00:18:55.800 ] 
00:18:55.800 }' 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.800 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.059 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:56.059 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.059 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.059 [2024-12-06 13:13:02.556502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.059 [2024-12-06 13:13:02.556563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.059 [2024-12-06 13:13:02.560405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.059 [2024-12-06 13:13:02.560519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.059 { 00:18:56.059 "results": [ 00:18:56.059 { 00:18:56.059 "job": "raid_bdev1", 00:18:56.059 "core_mask": "0x1", 00:18:56.059 "workload": "randrw", 00:18:56.059 "percentage": 50, 00:18:56.059 "status": "finished", 00:18:56.059 "queue_depth": 1, 00:18:56.059 "io_size": 131072, 00:18:56.059 "runtime": 1.377178, 00:18:56.059 "iops": 6875.654417947426, 00:18:56.059 "mibps": 859.4568022434282, 00:18:56.059 "io_failed": 0, 00:18:56.059 "io_timeout": 0, 00:18:56.059 "avg_latency_us": 141.02815580026692, 00:18:56.059 "min_latency_us": 41.42545454545454, 00:18:56.059 "max_latency_us": 1921.3963636363637 00:18:56.059 } 00:18:56.059 ], 00:18:56.059 "core_count": 1 00:18:56.059 } 00:18:56.059 [2024-12-06 13:13:02.560695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.059 [2024-12-06 13:13:02.560722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:18:56.059 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.059 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75619 00:18:56.059 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75619 ']' 00:18:56.059 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75619 00:18:56.060 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:18:56.060 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.060 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75619 00:18:56.318 killing process with pid 75619 00:18:56.318 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.318 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.318 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75619' 00:18:56.318 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75619 00:18:56.318 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75619 00:18:56.318 [2024-12-06 13:13:02.604506] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.577 [2024-12-06 13:13:02.918909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.953 13:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.es2XwWBRYk 00:18:57.953 13:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:57.953 13:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:57.953 13:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:18:57.953 13:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:18:57.954 13:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:57.954 13:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:57.954 13:13:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:57.954 00:18:57.954 real 0m4.955s 00:18:57.954 user 0m5.946s 00:18:57.954 sys 0m0.692s 00:18:57.954 13:13:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.954 ************************************ 00:18:57.954 END TEST raid_write_error_test 00:18:57.954 ************************************ 00:18:57.954 13:13:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.954 13:13:04 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:18:57.954 13:13:04 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:18:57.954 13:13:04 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:18:57.954 13:13:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:57.954 13:13:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.954 13:13:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.954 ************************************ 00:18:57.954 START TEST raid_rebuild_test 00:18:57.954 ************************************ 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:57.954 
13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75768 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75768 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75768 ']' 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.954 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.954 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:57.954 Zero copy mechanism will not be used. 00:18:57.954 [2024-12-06 13:13:04.295662] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:18:57.954 [2024-12-06 13:13:04.295826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75768 ] 00:18:57.954 [2024-12-06 13:13:04.471286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.213 [2024-12-06 13:13:04.619075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.472 [2024-12-06 13:13:04.845461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.472 [2024-12-06 13:13:04.845553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 BaseBdev1_malloc 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 [2024-12-06 13:13:05.373578] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:59.039 
[2024-12-06 13:13:05.373880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.039 [2024-12-06 13:13:05.373941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:59.039 [2024-12-06 13:13:05.373964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.039 [2024-12-06 13:13:05.376984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.039 [2024-12-06 13:13:05.377035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.039 BaseBdev1 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 BaseBdev2_malloc 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 [2024-12-06 13:13:05.424756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:59.039 [2024-12-06 13:13:05.424990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.039 [2024-12-06 13:13:05.425094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:18:59.039 [2024-12-06 13:13:05.425254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.039 [2024-12-06 13:13:05.428174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.039 BaseBdev2 00:18:59.039 [2024-12-06 13:13:05.428401] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 spare_malloc 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 spare_delay 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 [2024-12-06 13:13:05.492648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:59.039 [2024-12-06 13:13:05.492982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:59.039 [2024-12-06 13:13:05.493038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:59.039 [2024-12-06 13:13:05.493060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.039 [2024-12-06 13:13:05.496129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.039 [2024-12-06 13:13:05.496186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:59.039 spare 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 [2024-12-06 13:13:05.500895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.039 [2024-12-06 13:13:05.503842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.039 [2024-12-06 13:13:05.504144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:59.039 [2024-12-06 13:13:05.504182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:59.039 [2024-12-06 13:13:05.504543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:59.039 [2024-12-06 13:13:05.504760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:59.039 [2024-12-06 13:13:05.504779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:59.039 [2024-12-06 13:13:05.505029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.039 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.040 "name": "raid_bdev1", 00:18:59.040 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:18:59.040 "strip_size_kb": 0, 00:18:59.040 "state": "online", 00:18:59.040 
"raid_level": "raid1", 00:18:59.040 "superblock": false, 00:18:59.040 "num_base_bdevs": 2, 00:18:59.040 "num_base_bdevs_discovered": 2, 00:18:59.040 "num_base_bdevs_operational": 2, 00:18:59.040 "base_bdevs_list": [ 00:18:59.040 { 00:18:59.040 "name": "BaseBdev1", 00:18:59.040 "uuid": "c1d3caaf-bb9f-52d4-9d31-3d1dc39a20c2", 00:18:59.040 "is_configured": true, 00:18:59.040 "data_offset": 0, 00:18:59.040 "data_size": 65536 00:18:59.040 }, 00:18:59.040 { 00:18:59.040 "name": "BaseBdev2", 00:18:59.040 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:18:59.040 "is_configured": true, 00:18:59.040 "data_offset": 0, 00:18:59.040 "data_size": 65536 00:18:59.040 } 00:18:59.040 ] 00:18:59.040 }' 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.040 13:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.606 [2024-12-06 13:13:06.045689] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:59.606 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.865 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:00.124 [2024-12-06 13:13:06.473511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:00.124 /dev/nbd0 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.124 1+0 records in 00:19:00.124 1+0 records out 00:19:00.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359047 s, 11.4 MB/s 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:00.124 13:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:06.688 65536+0 records in 00:19:06.688 65536+0 records out 00:19:06.688 33554432 bytes (34 MB, 32 MiB) copied, 6.42817 s, 5.2 MB/s 00:19:06.688 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:06.688 13:13:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:06.688 13:13:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:06.688 13:13:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:06.688 13:13:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:06.688 13:13:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:06.688 13:13:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:06.946 [2024-12-06 13:13:13.221055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.946 [2024-12-06 13:13:13.252045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.946 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.947 "name": "raid_bdev1", 00:19:06.947 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:06.947 "strip_size_kb": 0, 00:19:06.947 "state": "online", 00:19:06.947 "raid_level": "raid1", 00:19:06.947 "superblock": false, 00:19:06.947 "num_base_bdevs": 2, 00:19:06.947 "num_base_bdevs_discovered": 1, 00:19:06.947 "num_base_bdevs_operational": 1, 00:19:06.947 "base_bdevs_list": [ 00:19:06.947 { 00:19:06.947 "name": null, 00:19:06.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.947 "is_configured": false, 00:19:06.947 "data_offset": 0, 00:19:06.947 "data_size": 65536 00:19:06.947 }, 00:19:06.947 { 00:19:06.947 "name": "BaseBdev2", 00:19:06.947 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:06.947 "is_configured": true, 00:19:06.947 "data_offset": 0, 00:19:06.947 "data_size": 65536 00:19:06.947 } 00:19:06.947 ] 00:19:06.947 }' 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.947 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.513 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:07.513 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.513 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.513 [2024-12-06 13:13:13.752232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:07.513 [2024-12-06 13:13:13.770933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:19:07.513 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.513 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:07.513 [2024-12-06 13:13:13.773978] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.447 "name": "raid_bdev1", 00:19:08.447 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:08.447 "strip_size_kb": 0, 00:19:08.447 "state": "online", 00:19:08.447 "raid_level": "raid1", 00:19:08.447 "superblock": false, 00:19:08.447 "num_base_bdevs": 2, 00:19:08.447 "num_base_bdevs_discovered": 2, 00:19:08.447 "num_base_bdevs_operational": 2, 00:19:08.447 "process": { 00:19:08.447 "type": "rebuild", 00:19:08.447 "target": "spare", 00:19:08.447 "progress": { 00:19:08.447 
"blocks": 20480, 00:19:08.447 "percent": 31 00:19:08.447 } 00:19:08.447 }, 00:19:08.447 "base_bdevs_list": [ 00:19:08.447 { 00:19:08.447 "name": "spare", 00:19:08.447 "uuid": "22078147-6679-5ea9-9866-87a6831e5121", 00:19:08.447 "is_configured": true, 00:19:08.447 "data_offset": 0, 00:19:08.447 "data_size": 65536 00:19:08.447 }, 00:19:08.447 { 00:19:08.447 "name": "BaseBdev2", 00:19:08.447 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:08.447 "is_configured": true, 00:19:08.447 "data_offset": 0, 00:19:08.447 "data_size": 65536 00:19:08.447 } 00:19:08.447 ] 00:19:08.447 }' 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.447 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.447 [2024-12-06 13:13:14.932183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.705 [2024-12-06 13:13:14.985068] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:08.705 [2024-12-06 13:13:14.985365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.705 [2024-12-06 13:13:14.985395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.705 [2024-12-06 13:13:14.985419] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:08.705 13:13:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.705 "name": "raid_bdev1", 00:19:08.705 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:08.705 "strip_size_kb": 0, 00:19:08.705 "state": "online", 00:19:08.705 "raid_level": "raid1", 00:19:08.705 
"superblock": false, 00:19:08.705 "num_base_bdevs": 2, 00:19:08.705 "num_base_bdevs_discovered": 1, 00:19:08.705 "num_base_bdevs_operational": 1, 00:19:08.705 "base_bdevs_list": [ 00:19:08.705 { 00:19:08.705 "name": null, 00:19:08.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.705 "is_configured": false, 00:19:08.705 "data_offset": 0, 00:19:08.705 "data_size": 65536 00:19:08.705 }, 00:19:08.705 { 00:19:08.705 "name": "BaseBdev2", 00:19:08.705 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:08.705 "is_configured": true, 00:19:08.705 "data_offset": 0, 00:19:08.705 "data_size": 65536 00:19:08.705 } 00:19:08.705 ] 00:19:08.705 }' 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.705 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:09.271 "name": "raid_bdev1", 00:19:09.271 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:09.271 "strip_size_kb": 0, 00:19:09.271 "state": "online", 00:19:09.271 "raid_level": "raid1", 00:19:09.271 "superblock": false, 00:19:09.271 "num_base_bdevs": 2, 00:19:09.271 "num_base_bdevs_discovered": 1, 00:19:09.271 "num_base_bdevs_operational": 1, 00:19:09.271 "base_bdevs_list": [ 00:19:09.271 { 00:19:09.271 "name": null, 00:19:09.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.271 "is_configured": false, 00:19:09.271 "data_offset": 0, 00:19:09.271 "data_size": 65536 00:19:09.271 }, 00:19:09.271 { 00:19:09.271 "name": "BaseBdev2", 00:19:09.271 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:09.271 "is_configured": true, 00:19:09.271 "data_offset": 0, 00:19:09.271 "data_size": 65536 00:19:09.271 } 00:19:09.271 ] 00:19:09.271 }' 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.271 [2024-12-06 13:13:15.707880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.271 [2024-12-06 13:13:15.723860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:19:09.271 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.271 
13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:09.271 [2024-12-06 13:13:15.726514] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:10.205 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.205 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.205 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.205 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.205 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.464 "name": "raid_bdev1", 00:19:10.464 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:10.464 "strip_size_kb": 0, 00:19:10.464 "state": "online", 00:19:10.464 "raid_level": "raid1", 00:19:10.464 "superblock": false, 00:19:10.464 "num_base_bdevs": 2, 00:19:10.464 "num_base_bdevs_discovered": 2, 00:19:10.464 "num_base_bdevs_operational": 2, 00:19:10.464 "process": { 00:19:10.464 "type": "rebuild", 00:19:10.464 "target": "spare", 00:19:10.464 "progress": { 00:19:10.464 "blocks": 20480, 00:19:10.464 "percent": 31 00:19:10.464 } 00:19:10.464 }, 00:19:10.464 "base_bdevs_list": [ 
00:19:10.464 { 00:19:10.464 "name": "spare", 00:19:10.464 "uuid": "22078147-6679-5ea9-9866-87a6831e5121", 00:19:10.464 "is_configured": true, 00:19:10.464 "data_offset": 0, 00:19:10.464 "data_size": 65536 00:19:10.464 }, 00:19:10.464 { 00:19:10.464 "name": "BaseBdev2", 00:19:10.464 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:10.464 "is_configured": true, 00:19:10.464 "data_offset": 0, 00:19:10.464 "data_size": 65536 00:19:10.464 } 00:19:10.464 ] 00:19:10.464 }' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=408 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.464 
13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.464 "name": "raid_bdev1", 00:19:10.464 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:10.464 "strip_size_kb": 0, 00:19:10.464 "state": "online", 00:19:10.464 "raid_level": "raid1", 00:19:10.464 "superblock": false, 00:19:10.464 "num_base_bdevs": 2, 00:19:10.464 "num_base_bdevs_discovered": 2, 00:19:10.464 "num_base_bdevs_operational": 2, 00:19:10.464 "process": { 00:19:10.464 "type": "rebuild", 00:19:10.464 "target": "spare", 00:19:10.464 "progress": { 00:19:10.464 "blocks": 22528, 00:19:10.464 "percent": 34 00:19:10.464 } 00:19:10.464 }, 00:19:10.464 "base_bdevs_list": [ 00:19:10.464 { 00:19:10.464 "name": "spare", 00:19:10.464 "uuid": "22078147-6679-5ea9-9866-87a6831e5121", 00:19:10.464 "is_configured": true, 00:19:10.464 "data_offset": 0, 00:19:10.464 "data_size": 65536 00:19:10.464 }, 00:19:10.464 { 00:19:10.464 "name": "BaseBdev2", 00:19:10.464 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:10.464 "is_configured": true, 00:19:10.464 "data_offset": 0, 00:19:10.464 "data_size": 65536 00:19:10.464 } 00:19:10.464 ] 00:19:10.464 }' 00:19:10.464 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.723 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:10.723 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.723 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.723 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.673 "name": "raid_bdev1", 00:19:11.673 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:11.673 "strip_size_kb": 0, 00:19:11.673 "state": "online", 00:19:11.673 "raid_level": "raid1", 00:19:11.673 "superblock": false, 00:19:11.673 "num_base_bdevs": 2, 00:19:11.673 "num_base_bdevs_discovered": 2, 00:19:11.673 "num_base_bdevs_operational": 2, 00:19:11.673 "process": { 
00:19:11.673 "type": "rebuild", 00:19:11.673 "target": "spare", 00:19:11.673 "progress": { 00:19:11.673 "blocks": 47104, 00:19:11.673 "percent": 71 00:19:11.673 } 00:19:11.673 }, 00:19:11.673 "base_bdevs_list": [ 00:19:11.673 { 00:19:11.673 "name": "spare", 00:19:11.673 "uuid": "22078147-6679-5ea9-9866-87a6831e5121", 00:19:11.673 "is_configured": true, 00:19:11.673 "data_offset": 0, 00:19:11.673 "data_size": 65536 00:19:11.673 }, 00:19:11.673 { 00:19:11.673 "name": "BaseBdev2", 00:19:11.673 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:11.673 "is_configured": true, 00:19:11.673 "data_offset": 0, 00:19:11.673 "data_size": 65536 00:19:11.673 } 00:19:11.673 ] 00:19:11.673 }' 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.673 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.932 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.932 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.499 [2024-12-06 13:13:18.957091] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:12.499 [2024-12-06 13:13:18.957231] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:12.499 [2024-12-06 13:13:18.957317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.759 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.018 "name": "raid_bdev1", 00:19:13.018 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:13.018 "strip_size_kb": 0, 00:19:13.018 "state": "online", 00:19:13.018 "raid_level": "raid1", 00:19:13.018 "superblock": false, 00:19:13.018 "num_base_bdevs": 2, 00:19:13.018 "num_base_bdevs_discovered": 2, 00:19:13.018 "num_base_bdevs_operational": 2, 00:19:13.018 "base_bdevs_list": [ 00:19:13.018 { 00:19:13.018 "name": "spare", 00:19:13.018 "uuid": "22078147-6679-5ea9-9866-87a6831e5121", 00:19:13.018 "is_configured": true, 00:19:13.018 "data_offset": 0, 00:19:13.018 "data_size": 65536 00:19:13.018 }, 00:19:13.018 { 00:19:13.018 "name": "BaseBdev2", 00:19:13.018 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:13.018 "is_configured": true, 00:19:13.018 "data_offset": 0, 00:19:13.018 "data_size": 65536 00:19:13.018 } 00:19:13.018 ] 00:19:13.018 }' 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:13.018 13:13:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.018 "name": "raid_bdev1", 00:19:13.018 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:13.018 "strip_size_kb": 0, 00:19:13.018 "state": "online", 00:19:13.018 "raid_level": "raid1", 00:19:13.018 "superblock": false, 00:19:13.018 "num_base_bdevs": 2, 00:19:13.018 "num_base_bdevs_discovered": 2, 00:19:13.018 "num_base_bdevs_operational": 2, 00:19:13.018 "base_bdevs_list": [ 00:19:13.018 { 00:19:13.018 "name": "spare", 00:19:13.018 "uuid": "22078147-6679-5ea9-9866-87a6831e5121", 00:19:13.018 "is_configured": true, 
00:19:13.018 "data_offset": 0, 00:19:13.018 "data_size": 65536 00:19:13.018 }, 00:19:13.018 { 00:19:13.018 "name": "BaseBdev2", 00:19:13.018 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:13.018 "is_configured": true, 00:19:13.018 "data_offset": 0, 00:19:13.018 "data_size": 65536 00:19:13.018 } 00:19:13.018 ] 00:19:13.018 }' 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.018 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.277 13:13:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.277 "name": "raid_bdev1", 00:19:13.277 "uuid": "ce9714bb-e5b2-4460-bc4f-6eaa9185d4c6", 00:19:13.277 "strip_size_kb": 0, 00:19:13.277 "state": "online", 00:19:13.277 "raid_level": "raid1", 00:19:13.277 "superblock": false, 00:19:13.277 "num_base_bdevs": 2, 00:19:13.277 "num_base_bdevs_discovered": 2, 00:19:13.277 "num_base_bdevs_operational": 2, 00:19:13.277 "base_bdevs_list": [ 00:19:13.277 { 00:19:13.277 "name": "spare", 00:19:13.277 "uuid": "22078147-6679-5ea9-9866-87a6831e5121", 00:19:13.277 "is_configured": true, 00:19:13.277 "data_offset": 0, 00:19:13.277 "data_size": 65536 00:19:13.277 }, 00:19:13.277 { 00:19:13.277 "name": "BaseBdev2", 00:19:13.277 "uuid": "d29ee525-4e51-5970-8c02-32a1dc248b36", 00:19:13.277 "is_configured": true, 00:19:13.277 "data_offset": 0, 00:19:13.277 "data_size": 65536 00:19:13.277 } 00:19:13.277 ] 00:19:13.277 }' 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.277 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.536 [2024-12-06 13:13:20.031435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:13.536 [2024-12-06 
13:13:20.031512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.536 [2024-12-06 13:13:20.031630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.536 [2024-12-06 13:13:20.031731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.536 [2024-12-06 13:13:20.031747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.536 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.795 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:13.795 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:13.795 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:13.795 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:13.795 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.795 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:13.795 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:13.795 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:13.796 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:13.796 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:13.796 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:13.796 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:13.796 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:14.054 /dev/nbd0 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:14.055 1+0 records in 00:19:14.055 1+0 records out 00:19:14.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025965 s, 15.8 MB/s 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:14.055 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:14.313 /dev/nbd1 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:14.313 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:14.314 1+0 records in 00:19:14.314 1+0 records out 00:19:14.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525076 s, 7.8 MB/s 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:14.314 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:14.572 13:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:14.572 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.572 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:14.572 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:14.572 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:14.572 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.572 13:13:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:14.831 13:13:21 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.831 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75768 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75768 ']' 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75768 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75768 00:19:15.090 killing process with pid 75768 00:19:15.090 Received shutdown signal, test time was about 60.000000 seconds 00:19:15.090 00:19:15.090 Latency(us) 00:19:15.090 [2024-12-06T13:13:21.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.090 [2024-12-06T13:13:21.619Z] =================================================================================================================== 00:19:15.090 [2024-12-06T13:13:21.619Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75768' 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75768 00:19:15.090 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75768 00:19:15.090 [2024-12-06 13:13:21.615932] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.691 [2024-12-06 13:13:21.907331] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:16.625 00:19:16.625 real 0m18.860s 00:19:16.625 user 0m20.947s 00:19:16.625 sys 
0m3.455s 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.625 ************************************ 00:19:16.625 END TEST raid_rebuild_test 00:19:16.625 ************************************ 00:19:16.625 13:13:23 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:19:16.625 13:13:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:16.625 13:13:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.625 13:13:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.625 ************************************ 00:19:16.625 START TEST raid_rebuild_test_sb 00:19:16.625 ************************************ 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76215 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76215 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 76215 ']' 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.625 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.884 [2024-12-06 13:13:23.219310] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:19:16.884 [2024-12-06 13:13:23.219788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76215 ] 00:19:16.884 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:16.884 Zero copy mechanism will not be used. 
00:19:16.884 [2024-12-06 13:13:23.396677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.141 [2024-12-06 13:13:23.548089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.399 [2024-12-06 13:13:23.771103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.399 [2024-12-06 13:13:23.771462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 BaseBdev1_malloc 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 [2024-12-06 13:13:24.324192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:17.966 [2024-12-06 13:13:24.324454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.966 [2024-12-06 13:13:24.324515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:17.966 [2024-12-06 
13:13:24.324540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.966 [2024-12-06 13:13:24.327672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.966 [2024-12-06 13:13:24.327723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:17.966 BaseBdev1 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 BaseBdev2_malloc 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 [2024-12-06 13:13:24.379647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:17.966 [2024-12-06 13:13:24.379730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.966 [2024-12-06 13:13:24.379763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:17.966 [2024-12-06 13:13:24.379783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.966 [2024-12-06 13:13:24.382908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:19:17.966 [2024-12-06 13:13:24.383105] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:17.966 BaseBdev2 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 spare_malloc 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 spare_delay 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 [2024-12-06 13:13:24.457980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:17.966 [2024-12-06 13:13:24.458216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.966 [2024-12-06 13:13:24.458262] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:17.966 [2024-12-06 13:13:24.458285] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.966 [2024-12-06 13:13:24.461359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.966 spare 00:19:17.966 [2024-12-06 13:13:24.461545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 [2024-12-06 13:13:24.466370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.966 [2024-12-06 13:13:24.469056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.966 [2024-12-06 13:13:24.469520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:17.966 [2024-12-06 13:13:24.469552] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:17.966 [2024-12-06 13:13:24.469933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:17.966 [2024-12-06 13:13:24.470183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:17.966 [2024-12-06 13:13:24.470213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:17.966 [2024-12-06 13:13:24.470535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.966 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.226 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.226 "name": "raid_bdev1", 00:19:18.226 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:18.226 "strip_size_kb": 0, 00:19:18.226 "state": "online", 00:19:18.226 "raid_level": "raid1", 00:19:18.226 "superblock": true, 00:19:18.226 "num_base_bdevs": 2, 00:19:18.226 
"num_base_bdevs_discovered": 2, 00:19:18.226 "num_base_bdevs_operational": 2, 00:19:18.226 "base_bdevs_list": [ 00:19:18.226 { 00:19:18.226 "name": "BaseBdev1", 00:19:18.226 "uuid": "ebd76bb9-f1c6-50a6-bf58-3ef210ebaaa5", 00:19:18.226 "is_configured": true, 00:19:18.226 "data_offset": 2048, 00:19:18.226 "data_size": 63488 00:19:18.226 }, 00:19:18.226 { 00:19:18.226 "name": "BaseBdev2", 00:19:18.226 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:18.226 "is_configured": true, 00:19:18.226 "data_offset": 2048, 00:19:18.226 "data_size": 63488 00:19:18.226 } 00:19:18.226 ] 00:19:18.226 }' 00:19:18.226 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.226 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.484 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:18.484 13:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:18.484 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.484 13:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.484 [2024-12-06 13:13:24.999056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:18.743 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:19.002 [2024-12-06 13:13:25.414904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:19.002 /dev/nbd0 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:19.002 1+0 records in 00:19:19.002 1+0 records out 00:19:19.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048302 s, 8.5 MB/s 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:19.002 13:13:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:19.002 13:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:25.683 63488+0 records in 00:19:25.683 63488+0 records out 00:19:25.683 32505856 bytes (33 MB, 31 MiB) copied, 6.58717 s, 4.9 MB/s 00:19:25.683 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:25.683 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:25.683 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:25.684 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:25.684 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:25.684 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.684 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:25.963 [2024-12-06 13:13:32.392340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.963 [2024-12-06 13:13:32.408448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.963 13:13:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.963 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.963 "name": "raid_bdev1", 00:19:25.963 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:25.963 "strip_size_kb": 0, 00:19:25.963 "state": "online", 00:19:25.963 "raid_level": "raid1", 00:19:25.963 "superblock": true, 00:19:25.963 "num_base_bdevs": 2, 00:19:25.963 "num_base_bdevs_discovered": 1, 00:19:25.963 "num_base_bdevs_operational": 1, 00:19:25.963 "base_bdevs_list": [ 00:19:25.964 { 00:19:25.964 "name": null, 00:19:25.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.964 "is_configured": false, 00:19:25.964 "data_offset": 0, 00:19:25.964 "data_size": 63488 00:19:25.964 }, 00:19:25.964 { 00:19:25.964 "name": "BaseBdev2", 00:19:25.964 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:25.964 "is_configured": true, 00:19:25.964 "data_offset": 2048, 00:19:25.964 "data_size": 63488 00:19:25.964 } 00:19:25.964 ] 00:19:25.964 }' 00:19:25.964 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.964 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.530 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.530 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.530 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.530 [2024-12-06 13:13:32.924772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:19:26.530 [2024-12-06 13:13:32.943577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:19:26.530 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.530 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:26.530 [2024-12-06 13:13:32.946317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.727 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.727 "name": "raid_bdev1", 00:19:27.727 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:27.727 "strip_size_kb": 0, 00:19:27.727 "state": "online", 00:19:27.727 "raid_level": "raid1", 00:19:27.727 "superblock": true, 00:19:27.727 "num_base_bdevs": 2, 00:19:27.727 
"num_base_bdevs_discovered": 2, 00:19:27.727 "num_base_bdevs_operational": 2, 00:19:27.727 "process": { 00:19:27.727 "type": "rebuild", 00:19:27.727 "target": "spare", 00:19:27.727 "progress": { 00:19:27.727 "blocks": 20480, 00:19:27.727 "percent": 32 00:19:27.727 } 00:19:27.727 }, 00:19:27.727 "base_bdevs_list": [ 00:19:27.727 { 00:19:27.727 "name": "spare", 00:19:27.727 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:27.727 "is_configured": true, 00:19:27.727 "data_offset": 2048, 00:19:27.727 "data_size": 63488 00:19:27.727 }, 00:19:27.727 { 00:19:27.727 "name": "BaseBdev2", 00:19:27.727 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:27.727 "is_configured": true, 00:19:27.727 "data_offset": 2048, 00:19:27.727 "data_size": 63488 00:19:27.727 } 00:19:27.727 ] 00:19:27.727 }' 00:19:27.727 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.727 [2024-12-06 13:13:34.112842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.727 [2024-12-06 13:13:34.158722] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:27.727 [2024-12-06 13:13:34.159121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.727 [2024-12-06 13:13:34.159152] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.727 [2024-12-06 13:13:34.159171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.727 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.727 13:13:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.727 "name": "raid_bdev1", 00:19:27.727 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:27.727 "strip_size_kb": 0, 00:19:27.727 "state": "online", 00:19:27.727 "raid_level": "raid1", 00:19:27.727 "superblock": true, 00:19:27.727 "num_base_bdevs": 2, 00:19:27.727 "num_base_bdevs_discovered": 1, 00:19:27.727 "num_base_bdevs_operational": 1, 00:19:27.728 "base_bdevs_list": [ 00:19:27.728 { 00:19:27.728 "name": null, 00:19:27.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.728 "is_configured": false, 00:19:27.728 "data_offset": 0, 00:19:27.728 "data_size": 63488 00:19:27.728 }, 00:19:27.728 { 00:19:27.728 "name": "BaseBdev2", 00:19:27.728 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:27.728 "is_configured": true, 00:19:27.728 "data_offset": 2048, 00:19:27.728 "data_size": 63488 00:19:27.728 } 00:19:27.728 ] 00:19:27.728 }' 00:19:27.986 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.986 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.245 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.503 "name": "raid_bdev1", 00:19:28.503 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:28.503 "strip_size_kb": 0, 00:19:28.503 "state": "online", 00:19:28.503 "raid_level": "raid1", 00:19:28.503 "superblock": true, 00:19:28.503 "num_base_bdevs": 2, 00:19:28.503 "num_base_bdevs_discovered": 1, 00:19:28.503 "num_base_bdevs_operational": 1, 00:19:28.503 "base_bdevs_list": [ 00:19:28.503 { 00:19:28.503 "name": null, 00:19:28.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.503 "is_configured": false, 00:19:28.503 "data_offset": 0, 00:19:28.503 "data_size": 63488 00:19:28.503 }, 00:19:28.503 { 00:19:28.503 "name": "BaseBdev2", 00:19:28.503 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:28.503 "is_configured": true, 00:19:28.503 "data_offset": 2048, 00:19:28.503 "data_size": 63488 00:19:28.503 } 00:19:28.503 ] 00:19:28.503 }' 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.503 [2024-12-06 13:13:34.933634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.503 [2024-12-06 13:13:34.952706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.503 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:28.503 [2024-12-06 13:13:34.955904] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.439 13:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 13:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.699 "name": "raid_bdev1", 00:19:29.699 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:29.699 "strip_size_kb": 0, 00:19:29.699 "state": "online", 
00:19:29.699 "raid_level": "raid1", 00:19:29.699 "superblock": true, 00:19:29.699 "num_base_bdevs": 2, 00:19:29.699 "num_base_bdevs_discovered": 2, 00:19:29.699 "num_base_bdevs_operational": 2, 00:19:29.699 "process": { 00:19:29.699 "type": "rebuild", 00:19:29.699 "target": "spare", 00:19:29.699 "progress": { 00:19:29.699 "blocks": 20480, 00:19:29.699 "percent": 32 00:19:29.699 } 00:19:29.699 }, 00:19:29.699 "base_bdevs_list": [ 00:19:29.699 { 00:19:29.699 "name": "spare", 00:19:29.699 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:29.699 "is_configured": true, 00:19:29.699 "data_offset": 2048, 00:19:29.699 "data_size": 63488 00:19:29.699 }, 00:19:29.699 { 00:19:29.699 "name": "BaseBdev2", 00:19:29.699 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:29.699 "is_configured": true, 00:19:29.699 "data_offset": 2048, 00:19:29.699 "data_size": 63488 00:19:29.699 } 00:19:29.699 ] 00:19:29.699 }' 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:29.699 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=428 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.699 "name": "raid_bdev1", 00:19:29.699 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:29.699 "strip_size_kb": 0, 00:19:29.699 "state": "online", 00:19:29.699 "raid_level": "raid1", 00:19:29.699 "superblock": true, 00:19:29.699 "num_base_bdevs": 2, 00:19:29.699 "num_base_bdevs_discovered": 2, 00:19:29.699 "num_base_bdevs_operational": 2, 00:19:29.699 "process": { 00:19:29.699 "type": "rebuild", 00:19:29.699 "target": "spare", 00:19:29.699 "progress": { 00:19:29.699 "blocks": 22528, 00:19:29.699 "percent": 35 00:19:29.699 } 00:19:29.699 }, 00:19:29.699 
"base_bdevs_list": [ 00:19:29.699 { 00:19:29.699 "name": "spare", 00:19:29.699 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:29.699 "is_configured": true, 00:19:29.699 "data_offset": 2048, 00:19:29.699 "data_size": 63488 00:19:29.699 }, 00:19:29.699 { 00:19:29.699 "name": "BaseBdev2", 00:19:29.699 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:29.699 "is_configured": true, 00:19:29.699 "data_offset": 2048, 00:19:29.699 "data_size": 63488 00:19:29.699 } 00:19:29.699 ] 00:19:29.699 }' 00:19:29.699 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.011 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.011 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.011 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.011 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.987 13:13:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.987 "name": "raid_bdev1", 00:19:30.987 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:30.987 "strip_size_kb": 0, 00:19:30.987 "state": "online", 00:19:30.987 "raid_level": "raid1", 00:19:30.987 "superblock": true, 00:19:30.987 "num_base_bdevs": 2, 00:19:30.987 "num_base_bdevs_discovered": 2, 00:19:30.987 "num_base_bdevs_operational": 2, 00:19:30.987 "process": { 00:19:30.987 "type": "rebuild", 00:19:30.987 "target": "spare", 00:19:30.987 "progress": { 00:19:30.987 "blocks": 47104, 00:19:30.987 "percent": 74 00:19:30.987 } 00:19:30.987 }, 00:19:30.987 "base_bdevs_list": [ 00:19:30.987 { 00:19:30.987 "name": "spare", 00:19:30.987 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:30.987 "is_configured": true, 00:19:30.987 "data_offset": 2048, 00:19:30.987 "data_size": 63488 00:19:30.987 }, 00:19:30.987 { 00:19:30.987 "name": "BaseBdev2", 00:19:30.987 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:30.987 "is_configured": true, 00:19:30.987 "data_offset": 2048, 00:19:30.987 "data_size": 63488 00:19:30.987 } 00:19:30.987 ] 00:19:30.987 }' 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.987 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:19:31.922 [2024-12-06 13:13:38.086807] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:31.922 [2024-12-06 13:13:38.087259] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:31.922 [2024-12-06 13:13:38.087522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.181 "name": "raid_bdev1", 00:19:32.181 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:32.181 "strip_size_kb": 0, 00:19:32.181 "state": "online", 00:19:32.181 "raid_level": "raid1", 00:19:32.181 "superblock": true, 00:19:32.181 "num_base_bdevs": 2, 00:19:32.181 
"num_base_bdevs_discovered": 2, 00:19:32.181 "num_base_bdevs_operational": 2, 00:19:32.181 "base_bdevs_list": [ 00:19:32.181 { 00:19:32.181 "name": "spare", 00:19:32.181 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:32.181 "is_configured": true, 00:19:32.181 "data_offset": 2048, 00:19:32.181 "data_size": 63488 00:19:32.181 }, 00:19:32.181 { 00:19:32.181 "name": "BaseBdev2", 00:19:32.181 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:32.181 "is_configured": true, 00:19:32.181 "data_offset": 2048, 00:19:32.181 "data_size": 63488 00:19:32.181 } 00:19:32.181 ] 00:19:32.181 }' 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.181 "name": "raid_bdev1", 00:19:32.181 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:32.181 "strip_size_kb": 0, 00:19:32.181 "state": "online", 00:19:32.181 "raid_level": "raid1", 00:19:32.181 "superblock": true, 00:19:32.181 "num_base_bdevs": 2, 00:19:32.181 "num_base_bdevs_discovered": 2, 00:19:32.181 "num_base_bdevs_operational": 2, 00:19:32.181 "base_bdevs_list": [ 00:19:32.181 { 00:19:32.181 "name": "spare", 00:19:32.181 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:32.181 "is_configured": true, 00:19:32.181 "data_offset": 2048, 00:19:32.181 "data_size": 63488 00:19:32.181 }, 00:19:32.181 { 00:19:32.181 "name": "BaseBdev2", 00:19:32.181 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:32.181 "is_configured": true, 00:19:32.181 "data_offset": 2048, 00:19:32.181 "data_size": 63488 00:19:32.181 } 00:19:32.181 ] 00:19:32.181 }' 00:19:32.181 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.440 "name": "raid_bdev1", 00:19:32.440 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:32.440 "strip_size_kb": 0, 00:19:32.440 "state": "online", 00:19:32.440 "raid_level": "raid1", 00:19:32.440 "superblock": true, 00:19:32.440 "num_base_bdevs": 2, 00:19:32.440 "num_base_bdevs_discovered": 2, 00:19:32.440 "num_base_bdevs_operational": 2, 00:19:32.440 "base_bdevs_list": [ 00:19:32.440 { 00:19:32.440 "name": "spare", 00:19:32.440 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:32.440 "is_configured": true, 00:19:32.440 "data_offset": 2048, 00:19:32.440 
"data_size": 63488 00:19:32.440 }, 00:19:32.440 { 00:19:32.440 "name": "BaseBdev2", 00:19:32.440 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:32.440 "is_configured": true, 00:19:32.440 "data_offset": 2048, 00:19:32.440 "data_size": 63488 00:19:32.440 } 00:19:32.440 ] 00:19:32.440 }' 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.440 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.007 [2024-12-06 13:13:39.319529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.007 [2024-12-06 13:13:39.319744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.007 [2024-12-06 13:13:39.319897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.007 [2024-12-06 13:13:39.320007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.007 [2024-12-06 13:13:39.320030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:33.007 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:33.008 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:33.267 /dev/nbd0 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.267 1+0 records in 00:19:33.267 1+0 records out 00:19:33.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044578 s, 9.2 MB/s 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:33.267 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:33.525 /dev/nbd1 00:19:33.525 13:13:40 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.525 1+0 records in 00:19:33.525 1+0 records out 00:19:33.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411125 s, 10.0 MB/s 00:19:33.525 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.526 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:33.526 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.526 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:33.526 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:33.526 13:13:40 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.526 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:33.526 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:33.784 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:33.784 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:33.784 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:33.784 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:33.784 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:33.784 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.784 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.043 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.302 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.572 [2024-12-06 13:13:40.833342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:19:34.572 [2024-12-06 13:13:40.833459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.572 [2024-12-06 13:13:40.833504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:34.572 [2024-12-06 13:13:40.833523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.572 [2024-12-06 13:13:40.836966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.572 [2024-12-06 13:13:40.837012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:34.572 [2024-12-06 13:13:40.837172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:34.572 [2024-12-06 13:13:40.837239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.572 [2024-12-06 13:13:40.837464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.572 spare 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.572 [2024-12-06 13:13:40.937712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:34.572 [2024-12-06 13:13:40.937799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:34.572 [2024-12-06 13:13:40.938300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:19:34.572 [2024-12-06 13:13:40.938646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:34.572 [2024-12-06 13:13:40.938664] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:34.572 [2024-12-06 13:13:40.938950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.572 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.573 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.573 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.573 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.573 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.573 
13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.573 "name": "raid_bdev1", 00:19:34.573 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:34.573 "strip_size_kb": 0, 00:19:34.573 "state": "online", 00:19:34.573 "raid_level": "raid1", 00:19:34.573 "superblock": true, 00:19:34.573 "num_base_bdevs": 2, 00:19:34.573 "num_base_bdevs_discovered": 2, 00:19:34.573 "num_base_bdevs_operational": 2, 00:19:34.573 "base_bdevs_list": [ 00:19:34.573 { 00:19:34.573 "name": "spare", 00:19:34.573 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:34.573 "is_configured": true, 00:19:34.573 "data_offset": 2048, 00:19:34.573 "data_size": 63488 00:19:34.573 }, 00:19:34.573 { 00:19:34.573 "name": "BaseBdev2", 00:19:34.573 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:34.573 "is_configured": true, 00:19:34.573 "data_offset": 2048, 00:19:34.573 "data_size": 63488 00:19:34.573 } 00:19:34.573 ] 00:19:34.573 }' 00:19:34.573 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.573 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.149 "name": "raid_bdev1", 00:19:35.149 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:35.149 "strip_size_kb": 0, 00:19:35.149 "state": "online", 00:19:35.149 "raid_level": "raid1", 00:19:35.149 "superblock": true, 00:19:35.149 "num_base_bdevs": 2, 00:19:35.149 "num_base_bdevs_discovered": 2, 00:19:35.149 "num_base_bdevs_operational": 2, 00:19:35.149 "base_bdevs_list": [ 00:19:35.149 { 00:19:35.149 "name": "spare", 00:19:35.149 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:35.149 "is_configured": true, 00:19:35.149 "data_offset": 2048, 00:19:35.149 "data_size": 63488 00:19:35.149 }, 00:19:35.149 { 00:19:35.149 "name": "BaseBdev2", 00:19:35.149 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:35.149 "is_configured": true, 00:19:35.149 "data_offset": 2048, 00:19:35.149 "data_size": 63488 00:19:35.149 } 00:19:35.149 ] 00:19:35.149 }' 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.149 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.149 [2024-12-06 13:13:41.670017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:35.408 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.408 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.408 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.408 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.408 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.408 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.409 13:13:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.409 "name": "raid_bdev1", 00:19:35.409 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:35.409 "strip_size_kb": 0, 00:19:35.409 "state": "online", 00:19:35.409 "raid_level": "raid1", 00:19:35.409 "superblock": true, 00:19:35.409 "num_base_bdevs": 2, 00:19:35.409 "num_base_bdevs_discovered": 1, 00:19:35.409 "num_base_bdevs_operational": 1, 00:19:35.409 "base_bdevs_list": [ 00:19:35.409 { 00:19:35.409 "name": null, 00:19:35.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.409 "is_configured": false, 00:19:35.409 "data_offset": 0, 00:19:35.409 "data_size": 63488 00:19:35.409 }, 00:19:35.409 { 00:19:35.409 "name": "BaseBdev2", 00:19:35.409 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:35.409 "is_configured": true, 00:19:35.409 "data_offset": 2048, 00:19:35.409 "data_size": 63488 00:19:35.409 } 00:19:35.409 ] 00:19:35.409 }' 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.409 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.668 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:35.668 13:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.668 13:13:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.668 [2024-12-06 13:13:42.186169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.668 [2024-12-06 13:13:42.186548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:35.668 [2024-12-06 13:13:42.186581] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:35.668 [2024-12-06 13:13:42.186641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.926 [2024-12-06 13:13:42.202728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:19:35.926 13:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.926 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:35.926 [2024-12-06 13:13:42.205873] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.864 13:13:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.864 "name": "raid_bdev1", 00:19:36.864 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:36.864 "strip_size_kb": 0, 00:19:36.864 "state": "online", 00:19:36.864 "raid_level": "raid1", 00:19:36.864 "superblock": true, 00:19:36.864 "num_base_bdevs": 2, 00:19:36.864 "num_base_bdevs_discovered": 2, 00:19:36.864 "num_base_bdevs_operational": 2, 00:19:36.864 "process": { 00:19:36.864 "type": "rebuild", 00:19:36.864 "target": "spare", 00:19:36.864 "progress": { 00:19:36.864 "blocks": 20480, 00:19:36.864 "percent": 32 00:19:36.864 } 00:19:36.864 }, 00:19:36.864 "base_bdevs_list": [ 00:19:36.864 { 00:19:36.864 "name": "spare", 00:19:36.864 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:36.864 "is_configured": true, 00:19:36.864 "data_offset": 2048, 00:19:36.864 "data_size": 63488 00:19:36.864 }, 00:19:36.864 { 00:19:36.864 "name": "BaseBdev2", 00:19:36.864 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:36.864 "is_configured": true, 00:19:36.864 "data_offset": 2048, 00:19:36.864 "data_size": 63488 00:19:36.864 } 00:19:36.864 ] 00:19:36.864 }' 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.864 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.865 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.865 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.865 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:36.865 13:13:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.865 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.865 [2024-12-06 13:13:43.376349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.124 [2024-12-06 13:13:43.417084] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:37.124 [2024-12-06 13:13:43.417197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.124 [2024-12-06 13:13:43.417234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.124 [2024-12-06 13:13:43.417251] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.124 
13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.124 "name": "raid_bdev1", 00:19:37.124 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:37.124 "strip_size_kb": 0, 00:19:37.124 "state": "online", 00:19:37.124 "raid_level": "raid1", 00:19:37.124 "superblock": true, 00:19:37.124 "num_base_bdevs": 2, 00:19:37.124 "num_base_bdevs_discovered": 1, 00:19:37.124 "num_base_bdevs_operational": 1, 00:19:37.124 "base_bdevs_list": [ 00:19:37.124 { 00:19:37.124 "name": null, 00:19:37.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.124 "is_configured": false, 00:19:37.124 "data_offset": 0, 00:19:37.124 "data_size": 63488 00:19:37.124 }, 00:19:37.124 { 00:19:37.124 "name": "BaseBdev2", 00:19:37.124 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:37.124 "is_configured": true, 00:19:37.124 "data_offset": 2048, 00:19:37.124 "data_size": 63488 00:19:37.124 } 00:19:37.124 ] 00:19:37.124 }' 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.124 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.692 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:37.692 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.692 13:13:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.692 [2024-12-06 13:13:43.982887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:37.692 [2024-12-06 13:13:43.983277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.692 [2024-12-06 13:13:43.983326] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:37.692 [2024-12-06 13:13:43.983347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.692 [2024-12-06 13:13:43.984148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.692 [2024-12-06 13:13:43.984197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:37.692 [2024-12-06 13:13:43.984331] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:37.692 [2024-12-06 13:13:43.984358] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:37.692 [2024-12-06 13:13:43.984373] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
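The `verify_raid_bdev_process` / `verify_raid_bdev_state` steps traced throughout this log all follow the same pattern: fetch the raid bdev list over the RPC socket, `jq`-select the bdev of interest, and compare fields such as `.process.type` and `.process.target` against the expected values. A minimal standalone sketch of that jq-checking pattern is below; the JSON here is a hand-written stand-in modeled on the `raid_bdev_info` dumps in this log, not live `rpc.py bdev_raid_get_bdevs` output.

```shell
#!/usr/bin/env bash
# Sketch of the state-verification pattern from the trace: in the real test,
# raid_bdev_info comes from
#   rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all \
#     | jq -r '.[] | select(.name == "raid_bdev1")'
# Here a sample JSON document stands in for that RPC response.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 2,
  "process": { "type": "rebuild", "target": "spare" }
}'

# "// \"none\"" supplies a default when no process is running, matching the
# jq filters '.process.type // "none"' and '.process.target // "none"' above.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

[[ $process_type == rebuild && $process_target == spare ]] && echo OK
```

When no rebuild is in flight (as in the `verify_raid_bdev_process raid_bdev1 none none` steps), the `.process` key is absent and both filters fall through to `"none"`.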
00:19:37.692 [2024-12-06 13:13:43.984423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.692 [2024-12-06 13:13:44.000506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:37.692 spare 00:19:37.692 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.692 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:37.692 [2024-12-06 13:13:44.003266] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.636 "name": "raid_bdev1", 00:19:38.636 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:38.636 "strip_size_kb": 0, 00:19:38.636 "state": "online", 00:19:38.636 
"raid_level": "raid1", 00:19:38.636 "superblock": true, 00:19:38.636 "num_base_bdevs": 2, 00:19:38.636 "num_base_bdevs_discovered": 2, 00:19:38.636 "num_base_bdevs_operational": 2, 00:19:38.636 "process": { 00:19:38.636 "type": "rebuild", 00:19:38.636 "target": "spare", 00:19:38.636 "progress": { 00:19:38.636 "blocks": 18432, 00:19:38.636 "percent": 29 00:19:38.636 } 00:19:38.636 }, 00:19:38.636 "base_bdevs_list": [ 00:19:38.636 { 00:19:38.636 "name": "spare", 00:19:38.636 "uuid": "65e17fe9-6b4a-57b4-8aa9-7ffba49c1b2a", 00:19:38.636 "is_configured": true, 00:19:38.636 "data_offset": 2048, 00:19:38.636 "data_size": 63488 00:19:38.636 }, 00:19:38.636 { 00:19:38.636 "name": "BaseBdev2", 00:19:38.636 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:38.636 "is_configured": true, 00:19:38.636 "data_offset": 2048, 00:19:38.636 "data_size": 63488 00:19:38.636 } 00:19:38.636 ] 00:19:38.636 }' 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.636 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.636 [2024-12-06 13:13:45.157081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.894 [2024-12-06 13:13:45.215310] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:38.894 [2024-12-06 13:13:45.215701] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.894 [2024-12-06 13:13:45.215739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.894 [2024-12-06 13:13:45.215753] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.894 13:13:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.894 "name": "raid_bdev1", 00:19:38.894 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:38.894 "strip_size_kb": 0, 00:19:38.894 "state": "online", 00:19:38.894 "raid_level": "raid1", 00:19:38.894 "superblock": true, 00:19:38.894 "num_base_bdevs": 2, 00:19:38.894 "num_base_bdevs_discovered": 1, 00:19:38.894 "num_base_bdevs_operational": 1, 00:19:38.894 "base_bdevs_list": [ 00:19:38.894 { 00:19:38.894 "name": null, 00:19:38.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.894 "is_configured": false, 00:19:38.894 "data_offset": 0, 00:19:38.894 "data_size": 63488 00:19:38.894 }, 00:19:38.894 { 00:19:38.894 "name": "BaseBdev2", 00:19:38.894 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:38.894 "is_configured": true, 00:19:38.894 "data_offset": 2048, 00:19:38.894 "data_size": 63488 00:19:38.894 } 00:19:38.894 ] 00:19:38.894 }' 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.894 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.460 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.460 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.460 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.460 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.460 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.460 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.460 13:13:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.460 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.461 "name": "raid_bdev1", 00:19:39.461 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:39.461 "strip_size_kb": 0, 00:19:39.461 "state": "online", 00:19:39.461 "raid_level": "raid1", 00:19:39.461 "superblock": true, 00:19:39.461 "num_base_bdevs": 2, 00:19:39.461 "num_base_bdevs_discovered": 1, 00:19:39.461 "num_base_bdevs_operational": 1, 00:19:39.461 "base_bdevs_list": [ 00:19:39.461 { 00:19:39.461 "name": null, 00:19:39.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.461 "is_configured": false, 00:19:39.461 "data_offset": 0, 00:19:39.461 "data_size": 63488 00:19:39.461 }, 00:19:39.461 { 00:19:39.461 "name": "BaseBdev2", 00:19:39.461 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:39.461 "is_configured": true, 00:19:39.461 "data_offset": 2048, 00:19:39.461 "data_size": 63488 00:19:39.461 } 00:19:39.461 ] 00:19:39.461 }' 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.461 [2024-12-06 13:13:45.933481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:39.461 [2024-12-06 13:13:45.933589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.461 [2024-12-06 13:13:45.933643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:39.461 [2024-12-06 13:13:45.933671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.461 [2024-12-06 13:13:45.934337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.461 [2024-12-06 13:13:45.934371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:39.461 [2024-12-06 13:13:45.934504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:39.461 [2024-12-06 13:13:45.934534] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:39.461 [2024-12-06 13:13:45.934552] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:39.461 [2024-12-06 13:13:45.934568] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:39.461 BaseBdev1 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.461 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.868 "name": "raid_bdev1", 00:19:40.868 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:40.868 
"strip_size_kb": 0, 00:19:40.868 "state": "online", 00:19:40.868 "raid_level": "raid1", 00:19:40.868 "superblock": true, 00:19:40.868 "num_base_bdevs": 2, 00:19:40.868 "num_base_bdevs_discovered": 1, 00:19:40.868 "num_base_bdevs_operational": 1, 00:19:40.868 "base_bdevs_list": [ 00:19:40.868 { 00:19:40.868 "name": null, 00:19:40.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.868 "is_configured": false, 00:19:40.868 "data_offset": 0, 00:19:40.868 "data_size": 63488 00:19:40.868 }, 00:19:40.868 { 00:19:40.868 "name": "BaseBdev2", 00:19:40.868 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:40.868 "is_configured": true, 00:19:40.868 "data_offset": 2048, 00:19:40.868 "data_size": 63488 00:19:40.868 } 00:19:40.868 ] 00:19:40.868 }' 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.868 13:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.129 13:13:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.129 "name": "raid_bdev1", 00:19:41.129 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:41.129 "strip_size_kb": 0, 00:19:41.129 "state": "online", 00:19:41.129 "raid_level": "raid1", 00:19:41.129 "superblock": true, 00:19:41.129 "num_base_bdevs": 2, 00:19:41.129 "num_base_bdevs_discovered": 1, 00:19:41.129 "num_base_bdevs_operational": 1, 00:19:41.129 "base_bdevs_list": [ 00:19:41.129 { 00:19:41.129 "name": null, 00:19:41.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.129 "is_configured": false, 00:19:41.129 "data_offset": 0, 00:19:41.129 "data_size": 63488 00:19:41.129 }, 00:19:41.129 { 00:19:41.129 "name": "BaseBdev2", 00:19:41.129 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:41.129 "is_configured": true, 00:19:41.129 "data_offset": 2048, 00:19:41.129 "data_size": 63488 00:19:41.129 } 00:19:41.129 ] 00:19:41.129 }' 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.129 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.386 [2024-12-06 13:13:47.654333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.386 [2024-12-06 13:13:47.654642] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:41.386 [2024-12-06 13:13:47.654668] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:41.386 request: 00:19:41.386 { 00:19:41.386 "base_bdev": "BaseBdev1", 00:19:41.386 "raid_bdev": "raid_bdev1", 00:19:41.386 "method": "bdev_raid_add_base_bdev", 00:19:41.386 "req_id": 1 00:19:41.386 } 00:19:41.386 Got JSON-RPC error response 00:19:41.386 response: 00:19:41.386 { 00:19:41.386 "code": -22, 00:19:41.386 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:41.386 } 00:19:41.386 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.386 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:41.386 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.386 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.386 13:13:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.387 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.320 "name": "raid_bdev1", 00:19:42.320 "uuid": 
"905cd302-cef6-4128-b544-5f44275ff916", 00:19:42.320 "strip_size_kb": 0, 00:19:42.320 "state": "online", 00:19:42.320 "raid_level": "raid1", 00:19:42.320 "superblock": true, 00:19:42.320 "num_base_bdevs": 2, 00:19:42.320 "num_base_bdevs_discovered": 1, 00:19:42.320 "num_base_bdevs_operational": 1, 00:19:42.320 "base_bdevs_list": [ 00:19:42.320 { 00:19:42.320 "name": null, 00:19:42.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.320 "is_configured": false, 00:19:42.320 "data_offset": 0, 00:19:42.320 "data_size": 63488 00:19:42.320 }, 00:19:42.320 { 00:19:42.320 "name": "BaseBdev2", 00:19:42.320 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:42.320 "is_configured": true, 00:19:42.320 "data_offset": 2048, 00:19:42.320 "data_size": 63488 00:19:42.320 } 00:19:42.320 ] 00:19:42.320 }' 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.320 13:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.886 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.886 "name": "raid_bdev1", 00:19:42.886 "uuid": "905cd302-cef6-4128-b544-5f44275ff916", 00:19:42.886 "strip_size_kb": 0, 00:19:42.886 "state": "online", 00:19:42.886 "raid_level": "raid1", 00:19:42.886 "superblock": true, 00:19:42.886 "num_base_bdevs": 2, 00:19:42.886 "num_base_bdevs_discovered": 1, 00:19:42.886 "num_base_bdevs_operational": 1, 00:19:42.886 "base_bdevs_list": [ 00:19:42.886 { 00:19:42.886 "name": null, 00:19:42.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.886 "is_configured": false, 00:19:42.886 "data_offset": 0, 00:19:42.886 "data_size": 63488 00:19:42.887 }, 00:19:42.887 { 00:19:42.887 "name": "BaseBdev2", 00:19:42.887 "uuid": "32dc5a92-8a01-5ee4-bd80-11c538a3f0b3", 00:19:42.887 "is_configured": true, 00:19:42.887 "data_offset": 2048, 00:19:42.887 "data_size": 63488 00:19:42.887 } 00:19:42.887 ] 00:19:42.887 }' 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76215 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76215 ']' 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76215 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76215 00:19:42.887 killing process with pid 76215 00:19:42.887 Received shutdown signal, test time was about 60.000000 seconds 00:19:42.887 00:19:42.887 Latency(us) 00:19:42.887 [2024-12-06T13:13:49.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.887 [2024-12-06T13:13:49.416Z] =================================================================================================================== 00:19:42.887 [2024-12-06T13:13:49.416Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76215' 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76215 00:19:42.887 13:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76215 00:19:42.887 [2024-12-06 13:13:49.366442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.887 [2024-12-06 13:13:49.366674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.887 [2024-12-06 13:13:49.366780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.887 [2024-12-06 13:13:49.366805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:43.146 [2024-12-06 13:13:49.641307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.548 ************************************ 00:19:44.548 END TEST raid_rebuild_test_sb 
00:19:44.548 ************************************ 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:44.548 00:19:44.548 real 0m27.648s 00:19:44.548 user 0m34.158s 00:19:44.548 sys 0m4.279s 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.548 13:13:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:19:44.548 13:13:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:44.548 13:13:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.548 13:13:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.548 ************************************ 00:19:44.548 START TEST raid_rebuild_test_io 00:19:44.548 ************************************ 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:44.548 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76994 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76994 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76994 ']' 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.549 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.549 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:44.549 Zero copy mechanism will not be used. 00:19:44.549 [2024-12-06 13:13:50.934857] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:19:44.549 [2024-12-06 13:13:50.935022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76994 ] 00:19:44.808 [2024-12-06 13:13:51.127424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.808 [2024-12-06 13:13:51.288829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.066 [2024-12-06 13:13:51.500567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.066 [2024-12-06 13:13:51.500664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.644 13:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.644 13:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:19:45.644 13:13:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.644 13:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:45.644 13:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.644 13:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.644 BaseBdev1_malloc 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.644 [2024-12-06 13:13:52.036636] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:45.644 [2024-12-06 13:13:52.036746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.644 [2024-12-06 13:13:52.036783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:45.644 [2024-12-06 13:13:52.036806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.644 [2024-12-06 13:13:52.039892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.644 [2024-12-06 13:13:52.039947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.644 BaseBdev1 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.644 BaseBdev2_malloc 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.644 [2024-12-06 13:13:52.093970] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:45.644 [2024-12-06 13:13:52.094080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.644 [2024-12-06 13:13:52.094112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:45.644 [2024-12-06 13:13:52.094132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.644 [2024-12-06 13:13:52.097394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.644 [2024-12-06 13:13:52.097461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:45.644 BaseBdev2 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.644 spare_malloc 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.644 spare_delay 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.644 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.644 [2024-12-06 13:13:52.166160] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:45.644 [2024-12-06 13:13:52.166276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.644 [2024-12-06 13:13:52.166310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:45.644 [2024-12-06 13:13:52.166330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.644 [2024-12-06 13:13:52.169297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.644 [2024-12-06 13:13:52.169364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:45.903 spare 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.903 
13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.903 [2024-12-06 13:13:52.174255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:45.903 [2024-12-06 13:13:52.176870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.903 [2024-12-06 13:13:52.176993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:45.903 [2024-12-06 13:13:52.177015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:45.903 [2024-12-06 13:13:52.177312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:45.903 [2024-12-06 13:13:52.177569] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:45.903 [2024-12-06 13:13:52.177589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:45.903 [2024-12-06 13:13:52.177780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.903 "name": "raid_bdev1", 00:19:45.903 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:45.903 "strip_size_kb": 0, 00:19:45.903 "state": "online", 00:19:45.903 "raid_level": "raid1", 00:19:45.903 "superblock": false, 00:19:45.903 "num_base_bdevs": 2, 00:19:45.903 "num_base_bdevs_discovered": 2, 00:19:45.903 "num_base_bdevs_operational": 2, 00:19:45.903 "base_bdevs_list": [ 00:19:45.903 { 00:19:45.903 "name": "BaseBdev1", 00:19:45.903 "uuid": "45d09512-b08d-5f26-93dd-250abef22532", 00:19:45.903 "is_configured": true, 00:19:45.903 "data_offset": 0, 00:19:45.903 "data_size": 65536 00:19:45.903 }, 00:19:45.903 { 00:19:45.903 "name": "BaseBdev2", 00:19:45.903 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:45.903 "is_configured": true, 00:19:45.903 "data_offset": 0, 00:19:45.903 "data_size": 65536 00:19:45.903 } 00:19:45.903 ] 00:19:45.903 }' 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.903 13:13:52 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:46.470 [2024-12-06 13:13:52.702910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:19:46.470 [2024-12-06 13:13:52.810511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.470 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:46.470 "name": "raid_bdev1", 00:19:46.470 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:46.470 "strip_size_kb": 0, 00:19:46.470 "state": "online", 00:19:46.470 "raid_level": "raid1", 00:19:46.470 "superblock": false, 00:19:46.470 "num_base_bdevs": 2, 00:19:46.470 "num_base_bdevs_discovered": 1, 00:19:46.470 "num_base_bdevs_operational": 1, 00:19:46.470 "base_bdevs_list": [ 00:19:46.470 { 00:19:46.470 "name": null, 00:19:46.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.470 "is_configured": false, 00:19:46.470 "data_offset": 0, 00:19:46.470 "data_size": 65536 00:19:46.470 }, 00:19:46.470 { 00:19:46.470 "name": "BaseBdev2", 00:19:46.471 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:46.471 "is_configured": true, 00:19:46.471 "data_offset": 0, 00:19:46.471 "data_size": 65536 00:19:46.471 } 00:19:46.471 ] 00:19:46.471 }' 00:19:46.471 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.471 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.471 [2024-12-06 13:13:52.940312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:46.471 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:46.471 Zero copy mechanism will not be used. 00:19:46.471 Running I/O for 60 seconds... 
00:19:47.037 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.037 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.037 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.037 [2024-12-06 13:13:53.350926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.037 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.037 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:47.037 [2024-12-06 13:13:53.400505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:47.037 [2024-12-06 13:13:53.403123] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.037 [2024-12-06 13:13:53.505794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:47.037 [2024-12-06 13:13:53.506335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:47.296 [2024-12-06 13:13:53.734138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:47.296 [2024-12-06 13:13:53.734593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:47.815 163.00 IOPS, 489.00 MiB/s [2024-12-06T13:13:54.344Z] [2024-12-06 13:13:54.083320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:47.815 [2024-12-06 13:13:54.084085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:47.815 [2024-12-06 13:13:54.302370] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:47.815 [2024-12-06 13:13:54.302787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.074 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.074 "name": "raid_bdev1", 00:19:48.074 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:48.074 "strip_size_kb": 0, 00:19:48.074 "state": "online", 00:19:48.074 "raid_level": "raid1", 00:19:48.074 "superblock": false, 00:19:48.074 "num_base_bdevs": 2, 00:19:48.074 "num_base_bdevs_discovered": 2, 00:19:48.075 "num_base_bdevs_operational": 2, 00:19:48.075 "process": { 00:19:48.075 "type": "rebuild", 00:19:48.075 "target": "spare", 00:19:48.075 "progress": { 00:19:48.075 "blocks": 10240, 
00:19:48.075 "percent": 15 00:19:48.075 } 00:19:48.075 }, 00:19:48.075 "base_bdevs_list": [ 00:19:48.075 { 00:19:48.075 "name": "spare", 00:19:48.075 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:48.075 "is_configured": true, 00:19:48.075 "data_offset": 0, 00:19:48.075 "data_size": 65536 00:19:48.075 }, 00:19:48.075 { 00:19:48.075 "name": "BaseBdev2", 00:19:48.075 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:48.075 "is_configured": true, 00:19:48.075 "data_offset": 0, 00:19:48.075 "data_size": 65536 00:19:48.075 } 00:19:48.075 ] 00:19:48.075 }' 00:19:48.075 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.075 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.075 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.075 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.075 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:48.075 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.075 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.075 [2024-12-06 13:13:54.550637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.334 [2024-12-06 13:13:54.664956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:48.334 [2024-12-06 13:13:54.783056] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:48.334 [2024-12-06 13:13:54.801014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.334 [2024-12-06 13:13:54.801249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:19:48.334 [2024-12-06 13:13:54.801281] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:48.334 [2024-12-06 13:13:54.837172] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:19:48.334 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.334 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:48.334 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.334 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.334 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.334 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.334 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:48.334 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.335 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.335 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.335 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.335 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.335 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.335 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.335 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.593 13:13:54 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.593 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.593 "name": "raid_bdev1", 00:19:48.593 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:48.593 "strip_size_kb": 0, 00:19:48.593 "state": "online", 00:19:48.593 "raid_level": "raid1", 00:19:48.593 "superblock": false, 00:19:48.593 "num_base_bdevs": 2, 00:19:48.593 "num_base_bdevs_discovered": 1, 00:19:48.593 "num_base_bdevs_operational": 1, 00:19:48.593 "base_bdevs_list": [ 00:19:48.593 { 00:19:48.593 "name": null, 00:19:48.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.593 "is_configured": false, 00:19:48.593 "data_offset": 0, 00:19:48.593 "data_size": 65536 00:19:48.593 }, 00:19:48.593 { 00:19:48.593 "name": "BaseBdev2", 00:19:48.593 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:48.593 "is_configured": true, 00:19:48.593 "data_offset": 0, 00:19:48.593 "data_size": 65536 00:19:48.593 } 00:19:48.593 ] 00:19:48.593 }' 00:19:48.593 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.593 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.161 120.00 IOPS, 360.00 MiB/s [2024-12-06T13:13:55.690Z] 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.161 "name": "raid_bdev1", 00:19:49.161 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:49.161 "strip_size_kb": 0, 00:19:49.161 "state": "online", 00:19:49.161 "raid_level": "raid1", 00:19:49.161 "superblock": false, 00:19:49.161 "num_base_bdevs": 2, 00:19:49.161 "num_base_bdevs_discovered": 1, 00:19:49.161 "num_base_bdevs_operational": 1, 00:19:49.161 "base_bdevs_list": [ 00:19:49.161 { 00:19:49.161 "name": null, 00:19:49.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.161 "is_configured": false, 00:19:49.161 "data_offset": 0, 00:19:49.161 "data_size": 65536 00:19:49.161 }, 00:19:49.161 { 00:19:49.161 "name": "BaseBdev2", 00:19:49.161 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:49.161 "is_configured": true, 00:19:49.161 "data_offset": 0, 00:19:49.161 "data_size": 65536 00:19:49.161 } 00:19:49.161 ] 00:19:49.161 }' 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.161 [2024-12-06 13:13:55.569422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.161 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:49.161 [2024-12-06 13:13:55.626724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:49.161 [2024-12-06 13:13:55.629683] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.420 [2024-12-06 13:13:55.740879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:49.420 [2024-12-06 13:13:55.741576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:49.420 [2024-12-06 13:13:55.880426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:49.420 [2024-12-06 13:13:55.880807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:49.678 154.00 IOPS, 462.00 MiB/s [2024-12-06T13:13:56.207Z] [2024-12-06 13:13:56.122018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:49.938 [2024-12-06 13:13:56.265852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:50.197 [2024-12-06 13:13:56.531490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.197 "name": "raid_bdev1", 00:19:50.197 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:50.197 "strip_size_kb": 0, 00:19:50.197 "state": "online", 00:19:50.197 "raid_level": "raid1", 00:19:50.197 "superblock": false, 00:19:50.197 "num_base_bdevs": 2, 00:19:50.197 "num_base_bdevs_discovered": 2, 00:19:50.197 "num_base_bdevs_operational": 2, 00:19:50.197 "process": { 00:19:50.197 "type": "rebuild", 00:19:50.197 "target": "spare", 00:19:50.197 "progress": { 00:19:50.197 "blocks": 14336, 00:19:50.197 "percent": 21 00:19:50.197 } 00:19:50.197 }, 00:19:50.197 "base_bdevs_list": [ 00:19:50.197 { 00:19:50.197 "name": "spare", 00:19:50.197 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:50.197 "is_configured": true, 00:19:50.197 "data_offset": 0, 00:19:50.197 "data_size": 65536 00:19:50.197 }, 00:19:50.197 { 00:19:50.197 "name": "BaseBdev2", 
00:19:50.197 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:50.197 "is_configured": true, 00:19:50.197 "data_offset": 0, 00:19:50.197 "data_size": 65536 00:19:50.197 } 00:19:50.197 ] 00:19:50.197 }' 00:19:50.197 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.457 [2024-12-06 13:13:56.783946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:50.457 [2024-12-06 13:13:56.784231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.457 
13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.457 "name": "raid_bdev1", 00:19:50.457 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:50.457 "strip_size_kb": 0, 00:19:50.457 "state": "online", 00:19:50.457 "raid_level": "raid1", 00:19:50.457 "superblock": false, 00:19:50.457 "num_base_bdevs": 2, 00:19:50.457 "num_base_bdevs_discovered": 2, 00:19:50.457 "num_base_bdevs_operational": 2, 00:19:50.457 "process": { 00:19:50.457 "type": "rebuild", 00:19:50.457 "target": "spare", 00:19:50.457 "progress": { 00:19:50.457 "blocks": 16384, 00:19:50.457 "percent": 25 00:19:50.457 } 00:19:50.457 }, 00:19:50.457 "base_bdevs_list": [ 00:19:50.457 { 00:19:50.457 "name": "spare", 00:19:50.457 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:50.457 "is_configured": true, 00:19:50.457 "data_offset": 0, 00:19:50.457 "data_size": 65536 00:19:50.457 }, 00:19:50.457 { 00:19:50.457 "name": "BaseBdev2", 00:19:50.457 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:50.457 "is_configured": true, 00:19:50.457 "data_offset": 0, 00:19:50.457 "data_size": 65536 00:19:50.457 } 00:19:50.457 ] 00:19:50.457 }' 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.457 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:50.716 149.75 IOPS, 449.25 MiB/s [2024-12-06T13:13:57.245Z] [2024-12-06 13:13:57.045262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:50.716 [2024-12-06 13:13:57.046334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:50.975 [2024-12-06 13:13:57.285254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:51.233 [2024-12-06 13:13:57.650322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:51.491 [2024-12-06 13:13:57.895812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:51.491 128.40 IOPS, 385.20 MiB/s [2024-12-06T13:13:58.020Z] 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.491 13:13:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.491 13:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.491 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.491 "name": "raid_bdev1", 00:19:51.491 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:51.491 "strip_size_kb": 0, 00:19:51.491 "state": "online", 00:19:51.491 "raid_level": "raid1", 00:19:51.491 "superblock": false, 00:19:51.491 "num_base_bdevs": 2, 00:19:51.491 "num_base_bdevs_discovered": 2, 00:19:51.491 "num_base_bdevs_operational": 2, 00:19:51.491 "process": { 00:19:51.491 "type": "rebuild", 00:19:51.491 "target": "spare", 00:19:51.491 "progress": { 00:19:51.491 "blocks": 28672, 00:19:51.491 "percent": 43 00:19:51.491 } 00:19:51.491 }, 00:19:51.491 "base_bdevs_list": [ 00:19:51.491 { 00:19:51.491 "name": "spare", 00:19:51.491 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:51.491 "is_configured": true, 00:19:51.491 "data_offset": 0, 00:19:51.491 "data_size": 65536 00:19:51.491 }, 00:19:51.491 { 00:19:51.491 "name": "BaseBdev2", 00:19:51.491 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:51.491 "is_configured": true, 00:19:51.491 "data_offset": 0, 00:19:51.491 "data_size": 65536 00:19:51.491 } 00:19:51.491 ] 00:19:51.491 }' 00:19:51.750 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.750 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.750 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.750 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.750 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:51.750 [2024-12-06 13:13:58.271677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:52.318 [2024-12-06 13:13:58.622126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:52.318 [2024-12-06 13:13:58.762502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:52.577 117.50 IOPS, 352.50 MiB/s [2024-12-06T13:13:59.106Z] [2024-12-06 13:13:59.101235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.836 13:13:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.836 "name": "raid_bdev1", 00:19:52.836 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:52.836 "strip_size_kb": 0, 00:19:52.836 "state": "online", 00:19:52.836 "raid_level": "raid1", 00:19:52.836 "superblock": false, 00:19:52.836 "num_base_bdevs": 2, 00:19:52.836 "num_base_bdevs_discovered": 2, 00:19:52.836 "num_base_bdevs_operational": 2, 00:19:52.836 "process": { 00:19:52.836 "type": "rebuild", 00:19:52.836 "target": "spare", 00:19:52.836 "progress": { 00:19:52.836 "blocks": 47104, 00:19:52.836 "percent": 71 00:19:52.836 } 00:19:52.836 }, 00:19:52.836 "base_bdevs_list": [ 00:19:52.836 { 00:19:52.836 "name": "spare", 00:19:52.836 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:52.836 "is_configured": true, 00:19:52.836 "data_offset": 0, 00:19:52.836 "data_size": 65536 00:19:52.836 }, 00:19:52.836 { 00:19:52.836 "name": "BaseBdev2", 00:19:52.836 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:52.836 "is_configured": true, 00:19:52.836 "data_offset": 0, 00:19:52.836 "data_size": 65536 00:19:52.836 } 00:19:52.836 ] 00:19:52.836 }' 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.836 13:13:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.095 [2024-12-06 13:13:59.440671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:53.095 [2024-12-06 13:13:59.550590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:53.954 107.00 IOPS, 321.00 MiB/s [2024-12-06T13:14:00.483Z] [2024-12-06 13:14:00.233852] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.954 [2024-12-06 13:14:00.333763] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:53.954 [2024-12-06 13:14:00.336506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.954 "name": "raid_bdev1", 00:19:53.954 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:53.954 "strip_size_kb": 0, 00:19:53.954 "state": "online", 00:19:53.954 "raid_level": "raid1", 00:19:53.954 "superblock": false, 00:19:53.954 "num_base_bdevs": 2, 00:19:53.954 "num_base_bdevs_discovered": 2, 00:19:53.954 "num_base_bdevs_operational": 2, 00:19:53.954 "process": { 00:19:53.954 "type": "rebuild", 00:19:53.954 "target": "spare", 00:19:53.954 "progress": { 00:19:53.954 "blocks": 65536, 00:19:53.954 "percent": 100 00:19:53.954 } 00:19:53.954 }, 00:19:53.954 "base_bdevs_list": [ 00:19:53.954 { 00:19:53.954 "name": "spare", 00:19:53.954 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:53.954 "is_configured": true, 00:19:53.954 "data_offset": 0, 00:19:53.954 "data_size": 65536 00:19:53.954 }, 00:19:53.954 { 00:19:53.954 "name": "BaseBdev2", 00:19:53.954 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:53.954 "is_configured": true, 00:19:53.954 "data_offset": 0, 00:19:53.954 "data_size": 65536 00:19:53.954 } 00:19:53.954 ] 00:19:53.954 }' 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.954 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:55.089 97.38 IOPS, 292.12 MiB/s [2024-12-06T13:14:01.618Z] 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:55.089 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.089 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.089 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.089 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.089 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.090 "name": "raid_bdev1", 00:19:55.090 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:55.090 "strip_size_kb": 0, 00:19:55.090 "state": "online", 00:19:55.090 "raid_level": "raid1", 00:19:55.090 "superblock": false, 00:19:55.090 "num_base_bdevs": 2, 00:19:55.090 "num_base_bdevs_discovered": 2, 00:19:55.090 "num_base_bdevs_operational": 2, 00:19:55.090 "base_bdevs_list": [ 00:19:55.090 { 00:19:55.090 "name": "spare", 00:19:55.090 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:55.090 "is_configured": true, 00:19:55.090 "data_offset": 0, 00:19:55.090 "data_size": 65536 00:19:55.090 }, 00:19:55.090 { 00:19:55.090 "name": "BaseBdev2", 00:19:55.090 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:55.090 "is_configured": true, 00:19:55.090 "data_offset": 0, 00:19:55.090 "data_size": 65536 00:19:55.090 } 00:19:55.090 ] 00:19:55.090 }' 
00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:55.090 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.349 "name": "raid_bdev1", 00:19:55.349 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:55.349 "strip_size_kb": 0, 00:19:55.349 "state": "online", 00:19:55.349 "raid_level": "raid1", 00:19:55.349 "superblock": false, 00:19:55.349 
"num_base_bdevs": 2, 00:19:55.349 "num_base_bdevs_discovered": 2, 00:19:55.349 "num_base_bdevs_operational": 2, 00:19:55.349 "base_bdevs_list": [ 00:19:55.349 { 00:19:55.349 "name": "spare", 00:19:55.349 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:55.349 "is_configured": true, 00:19:55.349 "data_offset": 0, 00:19:55.349 "data_size": 65536 00:19:55.349 }, 00:19:55.349 { 00:19:55.349 "name": "BaseBdev2", 00:19:55.349 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:55.349 "is_configured": true, 00:19:55.349 "data_offset": 0, 00:19:55.349 "data_size": 65536 00:19:55.349 } 00:19:55.349 ] 00:19:55.349 }' 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.349 13:14:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.349 "name": "raid_bdev1", 00:19:55.349 "uuid": "577dfb26-e0f7-42d4-8b5f-41abc3a2d570", 00:19:55.349 "strip_size_kb": 0, 00:19:55.349 "state": "online", 00:19:55.349 "raid_level": "raid1", 00:19:55.349 "superblock": false, 00:19:55.349 "num_base_bdevs": 2, 00:19:55.349 "num_base_bdevs_discovered": 2, 00:19:55.349 "num_base_bdevs_operational": 2, 00:19:55.349 "base_bdevs_list": [ 00:19:55.349 { 00:19:55.349 "name": "spare", 00:19:55.349 "uuid": "80f29527-3496-5470-aa93-63259484005b", 00:19:55.349 "is_configured": true, 00:19:55.349 "data_offset": 0, 00:19:55.349 "data_size": 65536 00:19:55.349 }, 00:19:55.349 { 00:19:55.349 "name": "BaseBdev2", 00:19:55.349 "uuid": "b369b0bf-96a9-553c-b987-392b1b459cc6", 00:19:55.349 "is_configured": true, 00:19:55.349 "data_offset": 0, 00:19:55.349 "data_size": 65536 00:19:55.349 } 00:19:55.349 ] 00:19:55.349 }' 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.349 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.868 90.11 IOPS, 270.33 MiB/s [2024-12-06T13:14:02.397Z] 13:14:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:55.868 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.868 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.868 [2024-12-06 13:14:02.303561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.868 [2024-12-06 13:14:02.303600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.868 00:19:55.868 Latency(us) 00:19:55.868 [2024-12-06T13:14:02.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.868 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:55.868 raid_bdev1 : 9.38 88.74 266.22 0.00 0.00 15264.30 296.03 111530.36 00:19:55.868 [2024-12-06T13:14:02.397Z] =================================================================================================================== 00:19:55.868 [2024-12-06T13:14:02.397Z] Total : 88.74 266.22 0.00 0.00 15264.30 296.03 111530.36 00:19:55.868 [2024-12-06 13:14:02.338839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.868 { 00:19:55.868 "results": [ 00:19:55.868 { 00:19:55.868 "job": "raid_bdev1", 00:19:55.868 "core_mask": "0x1", 00:19:55.868 "workload": "randrw", 00:19:55.868 "percentage": 50, 00:19:55.868 "status": "finished", 00:19:55.868 "queue_depth": 2, 00:19:55.868 "io_size": 3145728, 00:19:55.868 "runtime": 9.375789, 00:19:55.868 "iops": 88.73919837573136, 00:19:55.868 "mibps": 266.2175951271941, 00:19:55.868 "io_failed": 0, 00:19:55.868 "io_timeout": 0, 00:19:55.868 "avg_latency_us": 15264.299860139861, 00:19:55.868 "min_latency_us": 296.0290909090909, 00:19:55.868 "max_latency_us": 111530.35636363637 00:19:55.868 } 00:19:55.868 ], 00:19:55.868 "core_count": 1 00:19:55.868 } 00:19:55.868 [2024-12-06 13:14:02.339197] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.868 [2024-12-06 13:14:02.339334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.868 [2024-12-06 13:14:02.339362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:55.868 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.868 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.868 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.868 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.868 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:55.868 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@12 -- # local i 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.127 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:56.385 /dev/nbd0 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.385 1+0 records in 00:19:56.385 1+0 records out 00:19:56.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450362 s, 9.1 MB/s 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@890 -- # size=4096 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.385 13:14:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:56.642 /dev/nbd1 00:19:56.642 13:14:03 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.642 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.643 1+0 records in 00:19:56.643 1+0 records out 00:19:56.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072607 s, 5.6 MB/s 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.643 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:56.901 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:56.901 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.901 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:56.901 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:56.901 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:56.901 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:56.901 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 
00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.160 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76994 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76994 ']' 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76994 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 
-- # uname 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76994 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.454 killing process with pid 76994 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76994' 00:19:57.454 Received shutdown signal, test time was about 10.981444 seconds 00:19:57.454 00:19:57.454 Latency(us) 00:19:57.454 [2024-12-06T13:14:03.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.454 [2024-12-06T13:14:03.983Z] =================================================================================================================== 00:19:57.454 [2024-12-06T13:14:03.983Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76994 00:19:57.454 [2024-12-06 13:14:03.925074] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:57.454 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76994 00:19:57.720 [2024-12-06 13:14:04.140161] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:59.111 13:14:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:59.111 00:19:59.111 real 0m14.495s 00:19:59.111 user 0m18.795s 00:19:59.111 sys 0m1.558s 00:19:59.111 13:14:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.111 13:14:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:59.111 ************************************ 00:19:59.111 END TEST 
raid_rebuild_test_io 00:19:59.111 ************************************ 00:19:59.111 13:14:05 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:19:59.111 13:14:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:59.111 13:14:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.111 13:14:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:59.111 ************************************ 00:19:59.111 START TEST raid_rebuild_test_sb_io 00:19:59.111 ************************************ 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77398 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77398 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77398 ']' 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.112 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:59.112 [2024-12-06 13:14:05.495239] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:19:59.112 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:59.112 Zero copy mechanism will not be used. 00:19:59.112 [2024-12-06 13:14:05.496074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77398 ] 00:19:59.369 [2024-12-06 13:14:05.697211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.369 [2024-12-06 13:14:05.895924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.935 [2024-12-06 13:14:06.163404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.935 [2024-12-06 13:14:06.163468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1_malloc 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 BaseBdev1_malloc 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 [2024-12-06 13:14:06.538795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:00.194 [2024-12-06 13:14:06.538934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.194 [2024-12-06 13:14:06.538966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:00.194 [2024-12-06 13:14:06.538985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.194 [2024-12-06 13:14:06.541915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.194 [2024-12-06 13:14:06.541976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:00.194 BaseBdev1 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 BaseBdev2_malloc 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 [2024-12-06 13:14:06.601092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:00.194 [2024-12-06 13:14:06.601209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.194 [2024-12-06 13:14:06.601239] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:00.194 [2024-12-06 13:14:06.601260] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.194 [2024-12-06 13:14:06.604415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.194 [2024-12-06 13:14:06.604509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:00.194 BaseBdev2 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 spare_malloc 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 spare_delay 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 [2024-12-06 13:14:06.687948] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:00.194 [2024-12-06 13:14:06.688087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.194 [2024-12-06 13:14:06.688121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:00.194 [2024-12-06 13:14:06.688141] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.194 [2024-12-06 13:14:06.691392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.194 [2024-12-06 13:14:06.691475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:00.194 spare 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:00.194 [2024-12-06 13:14:06.700220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.194 [2024-12-06 13:14:06.703150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:00.194 [2024-12-06 13:14:06.703423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:00.194 [2024-12-06 13:14:06.703474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:00.194 [2024-12-06 13:14:06.703832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:00.194 [2024-12-06 13:14:06.704141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:00.194 [2024-12-06 13:14:06.704168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:00.194 [2024-12-06 13:14:06.704419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.194 
13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.194 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.452 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.452 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.452 "name": "raid_bdev1", 00:20:00.452 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:00.452 "strip_size_kb": 0, 00:20:00.452 "state": "online", 00:20:00.452 "raid_level": "raid1", 00:20:00.452 "superblock": true, 00:20:00.452 "num_base_bdevs": 2, 00:20:00.452 "num_base_bdevs_discovered": 2, 00:20:00.452 "num_base_bdevs_operational": 2, 00:20:00.452 "base_bdevs_list": [ 00:20:00.452 { 00:20:00.452 "name": "BaseBdev1", 00:20:00.452 "uuid": "bf6007f3-7e3e-5317-bbff-200b81722e85", 00:20:00.452 "is_configured": true, 00:20:00.452 "data_offset": 2048, 00:20:00.452 "data_size": 63488 00:20:00.452 }, 00:20:00.452 { 00:20:00.452 "name": "BaseBdev2", 00:20:00.452 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:00.452 "is_configured": true, 00:20:00.452 "data_offset": 2048, 00:20:00.452 "data_size": 63488 00:20:00.452 } 00:20:00.452 ] 00:20:00.452 }' 00:20:00.452 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.452 13:14:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.711 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:00.711 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:00.711 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.711 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.970 [2024-12-06 13:14:07.241060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.970 [2024-12-06 13:14:07.340668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.970 13:14:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.970 "name": "raid_bdev1", 00:20:00.970 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:00.970 "strip_size_kb": 0, 00:20:00.970 "state": "online", 00:20:00.970 "raid_level": "raid1", 00:20:00.970 "superblock": true, 00:20:00.970 "num_base_bdevs": 2, 00:20:00.970 "num_base_bdevs_discovered": 1, 00:20:00.970 "num_base_bdevs_operational": 1, 00:20:00.970 "base_bdevs_list": [ 00:20:00.970 { 00:20:00.970 "name": null, 00:20:00.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.970 "is_configured": false, 00:20:00.970 "data_offset": 0, 00:20:00.970 "data_size": 63488 00:20:00.970 }, 00:20:00.970 { 00:20:00.970 "name": "BaseBdev2", 00:20:00.970 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:00.970 "is_configured": true, 00:20:00.970 "data_offset": 2048, 00:20:00.970 "data_size": 63488 00:20:00.970 } 00:20:00.970 ] 00:20:00.970 }' 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.970 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.970 [2024-12-06 13:14:07.473594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:00.970 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:00.970 Zero copy mechanism will not be used. 00:20:00.970 Running I/O for 60 seconds... 
00:20:01.537 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:01.537 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.537 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.537 [2024-12-06 13:14:07.885153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:01.537 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.537 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:01.537 [2024-12-06 13:14:07.959269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:01.537 [2024-12-06 13:14:07.962101] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:01.811 [2024-12-06 13:14:08.072386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:01.811 [2024-12-06 13:14:08.073323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:01.811 [2024-12-06 13:14:08.199786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:01.811 [2024-12-06 13:14:08.200413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:02.082 135.00 IOPS, 405.00 MiB/s [2024-12-06T13:14:08.611Z] [2024-12-06 13:14:08.553300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:02.082 [2024-12-06 13:14:08.554273] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:02.341 [2024-12-06 13:14:08.784868] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.602 "name": "raid_bdev1", 00:20:02.602 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:02.602 "strip_size_kb": 0, 00:20:02.602 "state": "online", 00:20:02.602 "raid_level": "raid1", 00:20:02.602 "superblock": true, 00:20:02.602 "num_base_bdevs": 2, 00:20:02.602 "num_base_bdevs_discovered": 2, 00:20:02.602 "num_base_bdevs_operational": 2, 00:20:02.602 "process": { 00:20:02.602 "type": "rebuild", 00:20:02.602 "target": "spare", 00:20:02.602 "progress": { 00:20:02.602 "blocks": 12288, 00:20:02.602 "percent": 19 00:20:02.602 } 00:20:02.602 }, 00:20:02.602 "base_bdevs_list": [ 00:20:02.602 { 00:20:02.602 "name": "spare", 
00:20:02.602 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:02.602 "is_configured": true, 00:20:02.602 "data_offset": 2048, 00:20:02.602 "data_size": 63488 00:20:02.602 }, 00:20:02.602 { 00:20:02.602 "name": "BaseBdev2", 00:20:02.602 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:02.602 "is_configured": true, 00:20:02.602 "data_offset": 2048, 00:20:02.602 "data_size": 63488 00:20:02.602 } 00:20:02.602 ] 00:20:02.602 }' 00:20:02.602 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.602 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.602 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.602 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.602 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:02.602 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.602 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.602 [2024-12-06 13:14:09.106777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.862 [2024-12-06 13:14:09.228344] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:02.862 [2024-12-06 13:14:09.250141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.862 [2024-12-06 13:14:09.250314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.862 [2024-12-06 13:14:09.250340] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:02.862 [2024-12-06 13:14:09.301250] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000006080 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.862 "name": "raid_bdev1", 00:20:02.862 "uuid": 
"c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:02.862 "strip_size_kb": 0, 00:20:02.862 "state": "online", 00:20:02.862 "raid_level": "raid1", 00:20:02.862 "superblock": true, 00:20:02.862 "num_base_bdevs": 2, 00:20:02.862 "num_base_bdevs_discovered": 1, 00:20:02.862 "num_base_bdevs_operational": 1, 00:20:02.862 "base_bdevs_list": [ 00:20:02.862 { 00:20:02.862 "name": null, 00:20:02.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.862 "is_configured": false, 00:20:02.862 "data_offset": 0, 00:20:02.862 "data_size": 63488 00:20:02.862 }, 00:20:02.862 { 00:20:02.862 "name": "BaseBdev2", 00:20:02.862 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:02.862 "is_configured": true, 00:20:02.862 "data_offset": 2048, 00:20:02.862 "data_size": 63488 00:20:02.862 } 00:20:02.862 ] 00:20:02.862 }' 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.862 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:03.379 101.50 IOPS, 304.50 MiB/s [2024-12-06T13:14:09.908Z] 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.379 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.379 "name": "raid_bdev1", 00:20:03.379 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:03.379 "strip_size_kb": 0, 00:20:03.379 "state": "online", 00:20:03.379 "raid_level": "raid1", 00:20:03.379 "superblock": true, 00:20:03.379 "num_base_bdevs": 2, 00:20:03.379 "num_base_bdevs_discovered": 1, 00:20:03.379 "num_base_bdevs_operational": 1, 00:20:03.379 "base_bdevs_list": [ 00:20:03.379 { 00:20:03.379 "name": null, 00:20:03.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.379 "is_configured": false, 00:20:03.379 "data_offset": 0, 00:20:03.379 "data_size": 63488 00:20:03.379 }, 00:20:03.380 { 00:20:03.380 "name": "BaseBdev2", 00:20:03.380 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:03.380 "is_configured": true, 00:20:03.380 "data_offset": 2048, 00:20:03.380 "data_size": 63488 00:20:03.380 } 00:20:03.380 ] 00:20:03.380 }' 00:20:03.380 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.638 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:03.638 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.638 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:03.638 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:03.638 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.638 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:03.639 
[2024-12-06 13:14:10.031803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:03.639 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.639 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:03.639 [2024-12-06 13:14:10.105777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:03.639 [2024-12-06 13:14:10.108370] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.897 [2024-12-06 13:14:10.218167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:03.897 [2024-12-06 13:14:10.219186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:03.897 [2024-12-06 13:14:10.360130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:03.897 [2024-12-06 13:14:10.360712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:04.156 121.67 IOPS, 365.00 MiB/s [2024-12-06T13:14:10.685Z] [2024-12-06 13:14:10.602275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:04.724 [2024-12-06 13:14:10.982782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:04.724 13:14:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.724 "name": "raid_bdev1", 00:20:04.724 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:04.724 "strip_size_kb": 0, 00:20:04.724 "state": "online", 00:20:04.724 "raid_level": "raid1", 00:20:04.724 "superblock": true, 00:20:04.724 "num_base_bdevs": 2, 00:20:04.724 "num_base_bdevs_discovered": 2, 00:20:04.724 "num_base_bdevs_operational": 2, 00:20:04.724 "process": { 00:20:04.724 "type": "rebuild", 00:20:04.724 "target": "spare", 00:20:04.724 "progress": { 00:20:04.724 "blocks": 14336, 00:20:04.724 "percent": 22 00:20:04.724 } 00:20:04.724 }, 00:20:04.724 "base_bdevs_list": [ 00:20:04.724 { 00:20:04.724 "name": "spare", 00:20:04.724 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:04.724 "is_configured": true, 00:20:04.724 "data_offset": 2048, 00:20:04.724 "data_size": 63488 00:20:04.724 }, 00:20:04.724 { 00:20:04.724 "name": "BaseBdev2", 00:20:04.724 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:04.724 "is_configured": true, 00:20:04.724 "data_offset": 2048, 00:20:04.724 "data_size": 63488 00:20:04.724 } 00:20:04.724 ] 00:20:04.724 }' 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.724 [2024-12-06 13:14:11.220936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:04.724 [2024-12-06 13:14:11.221509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:04.724 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:04.724 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=463 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:04.984 13:14:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.984 "name": "raid_bdev1", 00:20:04.984 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:04.984 "strip_size_kb": 0, 00:20:04.984 "state": "online", 00:20:04.984 "raid_level": "raid1", 00:20:04.984 "superblock": true, 00:20:04.984 "num_base_bdevs": 2, 00:20:04.984 "num_base_bdevs_discovered": 2, 00:20:04.984 "num_base_bdevs_operational": 2, 00:20:04.984 "process": { 00:20:04.984 "type": "rebuild", 00:20:04.984 "target": "spare", 00:20:04.984 "progress": { 00:20:04.984 "blocks": 16384, 00:20:04.984 "percent": 25 00:20:04.984 } 00:20:04.984 }, 00:20:04.984 "base_bdevs_list": [ 00:20:04.984 { 00:20:04.984 "name": "spare", 00:20:04.984 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:04.984 "is_configured": true, 00:20:04.984 "data_offset": 2048, 00:20:04.984 "data_size": 63488 00:20:04.984 }, 00:20:04.984 { 00:20:04.984 "name": "BaseBdev2", 00:20:04.984 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:04.984 "is_configured": true, 00:20:04.984 "data_offset": 2048, 00:20:04.984 "data_size": 63488 00:20:04.984 } 00:20:04.984 ] 00:20:04.984 }' 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.984 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:05.243 110.25 IOPS, 330.75 MiB/s [2024-12-06T13:14:11.772Z] [2024-12-06 13:14:11.617443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:05.502 [2024-12-06 13:14:11.934630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:05.775 [2024-12-06 13:14:12.139618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:06.042 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:06.042 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.043 13:14:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.043 [2024-12-06 13:14:12.467512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.043 "name": "raid_bdev1", 00:20:06.043 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:06.043 "strip_size_kb": 0, 00:20:06.043 "state": "online", 00:20:06.043 "raid_level": "raid1", 00:20:06.043 "superblock": true, 00:20:06.043 "num_base_bdevs": 2, 00:20:06.043 "num_base_bdevs_discovered": 2, 00:20:06.043 "num_base_bdevs_operational": 2, 00:20:06.043 "process": { 00:20:06.043 "type": "rebuild", 00:20:06.043 "target": "spare", 00:20:06.043 "progress": { 00:20:06.043 "blocks": 32768, 00:20:06.043 "percent": 51 00:20:06.043 } 00:20:06.043 }, 00:20:06.043 "base_bdevs_list": [ 00:20:06.043 { 00:20:06.043 "name": "spare", 00:20:06.043 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:06.043 "is_configured": true, 00:20:06.043 "data_offset": 2048, 00:20:06.043 "data_size": 63488 00:20:06.043 }, 00:20:06.043 { 00:20:06.043 "name": "BaseBdev2", 00:20:06.043 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:06.043 "is_configured": true, 00:20:06.043 "data_offset": 2048, 00:20:06.043 "data_size": 63488 00:20:06.043 } 00:20:06.043 ] 00:20:06.043 }' 00:20:06.043 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.043 99.20 IOPS, 297.60 MiB/s [2024-12-06T13:14:12.572Z] 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.043 13:14:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.301 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.301 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:06.302 [2024-12-06 13:14:12.795123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:06.868 [2024-12-06 13:14:13.108257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:06.869 [2024-12-06 13:14:13.231855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:07.127 89.67 IOPS, 269.00 MiB/s [2024-12-06T13:14:13.656Z] 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:07.127 [2024-12-06 13:14:13.594935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.127 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.127 "name": "raid_bdev1", 00:20:07.127 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:07.128 "strip_size_kb": 0, 00:20:07.128 "state": "online", 00:20:07.128 "raid_level": "raid1", 00:20:07.128 "superblock": true, 00:20:07.128 "num_base_bdevs": 2, 00:20:07.128 "num_base_bdevs_discovered": 2, 00:20:07.128 "num_base_bdevs_operational": 2, 00:20:07.128 "process": { 00:20:07.128 "type": "rebuild", 00:20:07.128 "target": "spare", 00:20:07.128 "progress": { 00:20:07.128 "blocks": 49152, 00:20:07.128 "percent": 77 00:20:07.128 } 00:20:07.128 }, 00:20:07.128 "base_bdevs_list": [ 00:20:07.128 { 00:20:07.128 "name": "spare", 00:20:07.128 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:07.128 "is_configured": true, 00:20:07.128 "data_offset": 2048, 00:20:07.128 "data_size": 63488 00:20:07.128 }, 00:20:07.128 { 00:20:07.128 "name": "BaseBdev2", 00:20:07.128 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:07.128 "is_configured": true, 00:20:07.128 "data_offset": 2048, 00:20:07.128 "data_size": 63488 00:20:07.128 } 00:20:07.128 ] 00:20:07.128 }' 00:20:07.128 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.386 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.386 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.386 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.386 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 
-- # sleep 1 00:20:07.386 [2024-12-06 13:14:13.827314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:07.952 [2024-12-06 13:14:14.178289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:08.211 81.00 IOPS, 243.00 MiB/s [2024-12-06T13:14:14.740Z] [2024-12-06 13:14:14.623888] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:08.211 [2024-12-06 13:14:14.730983] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:08.211 [2024-12-06 13:14:14.734665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.470 "name": "raid_bdev1", 00:20:08.470 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:08.470 "strip_size_kb": 0, 00:20:08.470 "state": "online", 00:20:08.470 "raid_level": "raid1", 00:20:08.470 "superblock": true, 00:20:08.470 "num_base_bdevs": 2, 00:20:08.470 "num_base_bdevs_discovered": 2, 00:20:08.470 "num_base_bdevs_operational": 2, 00:20:08.470 "base_bdevs_list": [ 00:20:08.470 { 00:20:08.470 "name": "spare", 00:20:08.470 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:08.470 "is_configured": true, 00:20:08.470 "data_offset": 2048, 00:20:08.470 "data_size": 63488 00:20:08.470 }, 00:20:08.470 { 00:20:08.470 "name": "BaseBdev2", 00:20:08.470 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:08.470 "is_configured": true, 00:20:08.470 "data_offset": 2048, 00:20:08.470 "data_size": 63488 00:20:08.470 } 00:20:08.470 ] 00:20:08.470 }' 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.470 "name": "raid_bdev1", 00:20:08.470 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:08.470 "strip_size_kb": 0, 00:20:08.470 "state": "online", 00:20:08.470 "raid_level": "raid1", 00:20:08.470 "superblock": true, 00:20:08.470 "num_base_bdevs": 2, 00:20:08.470 "num_base_bdevs_discovered": 2, 00:20:08.470 "num_base_bdevs_operational": 2, 00:20:08.470 "base_bdevs_list": [ 00:20:08.470 { 00:20:08.470 "name": "spare", 00:20:08.470 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:08.470 "is_configured": true, 00:20:08.470 "data_offset": 2048, 00:20:08.470 "data_size": 63488 00:20:08.470 }, 00:20:08.470 { 00:20:08.470 "name": "BaseBdev2", 00:20:08.470 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:08.470 "is_configured": true, 00:20:08.470 "data_offset": 2048, 00:20:08.470 "data_size": 63488 00:20:08.470 } 00:20:08.470 ] 00:20:08.470 }' 00:20:08.470 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.730 "name": "raid_bdev1", 00:20:08.730 
"uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:08.730 "strip_size_kb": 0, 00:20:08.730 "state": "online", 00:20:08.730 "raid_level": "raid1", 00:20:08.730 "superblock": true, 00:20:08.730 "num_base_bdevs": 2, 00:20:08.730 "num_base_bdevs_discovered": 2, 00:20:08.730 "num_base_bdevs_operational": 2, 00:20:08.730 "base_bdevs_list": [ 00:20:08.730 { 00:20:08.730 "name": "spare", 00:20:08.730 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:08.730 "is_configured": true, 00:20:08.730 "data_offset": 2048, 00:20:08.730 "data_size": 63488 00:20:08.730 }, 00:20:08.730 { 00:20:08.730 "name": "BaseBdev2", 00:20:08.730 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:08.730 "is_configured": true, 00:20:08.730 "data_offset": 2048, 00:20:08.730 "data_size": 63488 00:20:08.730 } 00:20:08.730 ] 00:20:08.730 }' 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.730 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.247 75.12 IOPS, 225.38 MiB/s [2024-12-06T13:14:15.776Z] 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:09.247 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.247 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.247 [2024-12-06 13:14:15.632693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.247 [2024-12-06 13:14:15.632773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.247 00:20:09.247 Latency(us) 00:20:09.247 [2024-12-06T13:14:15.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.247 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:09.247 raid_bdev1 : 8.24 74.48 223.44 0.00 0.00 18108.72 292.31 113913.48 
00:20:09.247 [2024-12-06T13:14:15.776Z] =================================================================================================================== 00:20:09.247 [2024-12-06T13:14:15.776Z] Total : 74.48 223.44 0.00 0.00 18108.72 292.31 113913.48 00:20:09.247 [2024-12-06 13:14:15.743214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.247 [2024-12-06 13:14:15.743362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.247 [2024-12-06 13:14:15.743539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.247 [2024-12-06 13:14:15.743565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:09.247 { 00:20:09.247 "results": [ 00:20:09.247 { 00:20:09.247 "job": "raid_bdev1", 00:20:09.247 "core_mask": "0x1", 00:20:09.247 "workload": "randrw", 00:20:09.247 "percentage": 50, 00:20:09.247 "status": "finished", 00:20:09.247 "queue_depth": 2, 00:20:09.247 "io_size": 3145728, 00:20:09.247 "runtime": 8.243975, 00:20:09.247 "iops": 74.47863439663512, 00:20:09.247 "mibps": 223.43590318990533, 00:20:09.247 "io_failed": 0, 00:20:09.247 "io_timeout": 0, 00:20:09.247 "avg_latency_us": 18108.724382588094, 00:20:09.247 "min_latency_us": 292.30545454545455, 00:20:09.247 "max_latency_us": 113913.48363636364 00:20:09.247 } 00:20:09.247 ], 00:20:09.247 "core_count": 1 00:20:09.247 } 00:20:09.247 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.247 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.247 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.247 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:09.247 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:20:09.247 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.506 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:09.764 /dev/nbd0 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.764 1+0 records in 00:20:09.764 1+0 records out 00:20:09.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563208 s, 7.3 MB/s 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:09.764 13:14:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:09.764 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:10.022 /dev/nbd1 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # 
grep -q -w nbd1 /proc/partitions 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:10.022 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:10.281 1+0 records in 00:20:10.281 1+0 records out 00:20:10.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428133 s, 9.6 MB/s 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:10.281 13:14:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.281 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:10.540 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:10.540 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.540 13:14:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.799 [2024-12-06 13:14:17.305386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:10.799 
[2024-12-06 13:14:17.305525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.799 [2024-12-06 13:14:17.305568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:10.799 [2024-12-06 13:14:17.305588] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.799 [2024-12-06 13:14:17.309363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.799 [2024-12-06 13:14:17.309494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:10.799 [2024-12-06 13:14:17.309624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:10.799 [2024-12-06 13:14:17.309697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:10.799 [2024-12-06 13:14:17.309933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.799 spare 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.799 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.058 [2024-12-06 13:14:17.410092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:11.058 [2024-12-06 13:14:17.410160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:11.058 [2024-12-06 13:14:17.410630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:20:11.058 [2024-12-06 13:14:17.410885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:11.058 [2024-12-06 13:14:17.410912] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:11.058 [2024-12-06 13:14:17.411142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.058 "name": "raid_bdev1", 00:20:11.058 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:11.058 "strip_size_kb": 0, 00:20:11.058 "state": "online", 00:20:11.058 "raid_level": "raid1", 00:20:11.058 "superblock": true, 00:20:11.058 "num_base_bdevs": 2, 00:20:11.058 "num_base_bdevs_discovered": 2, 00:20:11.058 "num_base_bdevs_operational": 2, 00:20:11.058 "base_bdevs_list": [ 00:20:11.058 { 00:20:11.058 "name": "spare", 00:20:11.058 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:11.058 "is_configured": true, 00:20:11.058 "data_offset": 2048, 00:20:11.058 "data_size": 63488 00:20:11.058 }, 00:20:11.058 { 00:20:11.058 "name": "BaseBdev2", 00:20:11.058 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:11.058 "is_configured": true, 00:20:11.058 "data_offset": 2048, 00:20:11.058 "data_size": 63488 00:20:11.058 } 00:20:11.058 ] 00:20:11.058 }' 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.058 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.626 "name": "raid_bdev1", 00:20:11.626 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:11.626 "strip_size_kb": 0, 00:20:11.626 "state": "online", 00:20:11.626 "raid_level": "raid1", 00:20:11.626 "superblock": true, 00:20:11.626 "num_base_bdevs": 2, 00:20:11.626 "num_base_bdevs_discovered": 2, 00:20:11.626 "num_base_bdevs_operational": 2, 00:20:11.626 "base_bdevs_list": [ 00:20:11.626 { 00:20:11.626 "name": "spare", 00:20:11.626 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:11.626 "is_configured": true, 00:20:11.626 "data_offset": 2048, 00:20:11.626 "data_size": 63488 00:20:11.626 }, 00:20:11.626 { 00:20:11.626 "name": "BaseBdev2", 00:20:11.626 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:11.626 "is_configured": true, 00:20:11.626 "data_offset": 2048, 00:20:11.626 "data_size": 63488 00:20:11.626 } 00:20:11.626 ] 00:20:11.626 }' 00:20:11.626 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.626 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:11.626 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.626 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:11.626 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.626 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:11.626 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.626 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:11.626 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.909 [2024-12-06 13:14:18.162313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.909 "name": "raid_bdev1", 00:20:11.909 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:11.909 "strip_size_kb": 0, 00:20:11.909 "state": "online", 00:20:11.909 "raid_level": "raid1", 00:20:11.909 "superblock": true, 00:20:11.909 "num_base_bdevs": 2, 00:20:11.909 "num_base_bdevs_discovered": 1, 00:20:11.909 "num_base_bdevs_operational": 1, 00:20:11.909 "base_bdevs_list": [ 00:20:11.909 { 00:20:11.909 "name": null, 00:20:11.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.909 "is_configured": false, 00:20:11.909 "data_offset": 0, 00:20:11.909 "data_size": 63488 00:20:11.909 }, 00:20:11.909 { 00:20:11.909 "name": "BaseBdev2", 00:20:11.909 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:11.909 "is_configured": true, 00:20:11.909 "data_offset": 2048, 00:20:11.909 "data_size": 63488 00:20:11.909 } 00:20:11.909 ] 00:20:11.909 }' 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.909 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:12.476 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:20:12.476 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.476 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:12.476 [2024-12-06 13:14:18.710558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.476 [2024-12-06 13:14:18.710886] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:12.476 [2024-12-06 13:14:18.710910] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:12.476 [2024-12-06 13:14:18.710969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.476 [2024-12-06 13:14:18.729341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:20:12.476 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.476 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:12.476 [2024-12-06 13:14:18.732310] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.411 13:14:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.411 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.411 "name": "raid_bdev1", 00:20:13.411 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:13.411 "strip_size_kb": 0, 00:20:13.411 "state": "online", 00:20:13.411 "raid_level": "raid1", 00:20:13.411 "superblock": true, 00:20:13.411 "num_base_bdevs": 2, 00:20:13.411 "num_base_bdevs_discovered": 2, 00:20:13.411 "num_base_bdevs_operational": 2, 00:20:13.411 "process": { 00:20:13.411 "type": "rebuild", 00:20:13.411 "target": "spare", 00:20:13.411 "progress": { 00:20:13.411 "blocks": 20480, 00:20:13.411 "percent": 32 00:20:13.411 } 00:20:13.411 }, 00:20:13.411 "base_bdevs_list": [ 00:20:13.411 { 00:20:13.411 "name": "spare", 00:20:13.411 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:13.411 "is_configured": true, 00:20:13.411 "data_offset": 2048, 00:20:13.411 "data_size": 63488 00:20:13.411 }, 00:20:13.411 { 00:20:13.411 "name": "BaseBdev2", 00:20:13.411 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:13.411 "is_configured": true, 00:20:13.411 "data_offset": 2048, 00:20:13.412 "data_size": 63488 00:20:13.412 } 00:20:13.412 ] 00:20:13.412 }' 00:20:13.412 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.412 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.412 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.412 13:14:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.412 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:13.412 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.412 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.412 [2024-12-06 13:14:19.910585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.685 [2024-12-06 13:14:19.944176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:13.685 [2024-12-06 13:14:19.944338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.685 [2024-12-06 13:14:19.944369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.685 [2024-12-06 13:14:19.944382] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.685 13:14:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.685 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:13.686 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.686 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.686 "name": "raid_bdev1", 00:20:13.686 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:13.686 "strip_size_kb": 0, 00:20:13.686 "state": "online", 00:20:13.686 "raid_level": "raid1", 00:20:13.686 "superblock": true, 00:20:13.686 "num_base_bdevs": 2, 00:20:13.686 "num_base_bdevs_discovered": 1, 00:20:13.686 "num_base_bdevs_operational": 1, 00:20:13.686 "base_bdevs_list": [ 00:20:13.686 { 00:20:13.686 "name": null, 00:20:13.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.686 "is_configured": false, 00:20:13.686 "data_offset": 0, 00:20:13.686 "data_size": 63488 00:20:13.686 }, 00:20:13.686 { 00:20:13.686 "name": "BaseBdev2", 00:20:13.686 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:13.686 "is_configured": true, 00:20:13.686 "data_offset": 2048, 00:20:13.686 "data_size": 63488 00:20:13.686 } 00:20:13.686 ] 00:20:13.686 }' 00:20:13.686 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.686 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.252 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:14.252 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.252 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:14.252 [2024-12-06 13:14:20.523702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:14.252 [2024-12-06 13:14:20.523805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.252 [2024-12-06 13:14:20.523893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:14.252 [2024-12-06 13:14:20.523909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.252 [2024-12-06 13:14:20.524707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.252 [2024-12-06 13:14:20.524747] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:14.252 [2024-12-06 13:14:20.524921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:14.252 [2024-12-06 13:14:20.524941] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:14.252 [2024-12-06 13:14:20.524960] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:14.252 [2024-12-06 13:14:20.525001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.252 [2024-12-06 13:14:20.542155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:20:14.252 spare 00:20:14.252 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.252 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:14.252 [2024-12-06 13:14:20.544897] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.188 "name": "raid_bdev1", 00:20:15.188 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:15.188 "strip_size_kb": 0, 00:20:15.188 
"state": "online", 00:20:15.188 "raid_level": "raid1", 00:20:15.188 "superblock": true, 00:20:15.188 "num_base_bdevs": 2, 00:20:15.188 "num_base_bdevs_discovered": 2, 00:20:15.188 "num_base_bdevs_operational": 2, 00:20:15.188 "process": { 00:20:15.188 "type": "rebuild", 00:20:15.188 "target": "spare", 00:20:15.188 "progress": { 00:20:15.188 "blocks": 20480, 00:20:15.188 "percent": 32 00:20:15.188 } 00:20:15.188 }, 00:20:15.188 "base_bdevs_list": [ 00:20:15.188 { 00:20:15.188 "name": "spare", 00:20:15.188 "uuid": "e4c503de-d13c-5cb4-825a-5f04d7ee0dc9", 00:20:15.188 "is_configured": true, 00:20:15.188 "data_offset": 2048, 00:20:15.188 "data_size": 63488 00:20:15.188 }, 00:20:15.188 { 00:20:15.188 "name": "BaseBdev2", 00:20:15.188 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:15.188 "is_configured": true, 00:20:15.188 "data_offset": 2048, 00:20:15.188 "data_size": 63488 00:20:15.188 } 00:20:15.188 ] 00:20:15.188 }' 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.188 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.447 [2024-12-06 13:14:21.715196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.447 [2024-12-06 13:14:21.756946] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:20:15.447 [2024-12-06 13:14:21.757114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.447 [2024-12-06 13:14:21.757142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.447 [2024-12-06 13:14:21.757166] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.447 13:14:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.447 "name": "raid_bdev1", 00:20:15.447 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:15.447 "strip_size_kb": 0, 00:20:15.447 "state": "online", 00:20:15.447 "raid_level": "raid1", 00:20:15.447 "superblock": true, 00:20:15.447 "num_base_bdevs": 2, 00:20:15.447 "num_base_bdevs_discovered": 1, 00:20:15.447 "num_base_bdevs_operational": 1, 00:20:15.447 "base_bdevs_list": [ 00:20:15.447 { 00:20:15.447 "name": null, 00:20:15.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.447 "is_configured": false, 00:20:15.447 "data_offset": 0, 00:20:15.447 "data_size": 63488 00:20:15.447 }, 00:20:15.447 { 00:20:15.447 "name": "BaseBdev2", 00:20:15.447 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:15.447 "is_configured": true, 00:20:15.447 "data_offset": 2048, 00:20:15.447 "data_size": 63488 00:20:15.447 } 00:20:15.447 ] 00:20:15.447 }' 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.447 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.012 "name": "raid_bdev1", 00:20:16.012 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:16.012 "strip_size_kb": 0, 00:20:16.012 "state": "online", 00:20:16.012 "raid_level": "raid1", 00:20:16.012 "superblock": true, 00:20:16.012 "num_base_bdevs": 2, 00:20:16.012 "num_base_bdevs_discovered": 1, 00:20:16.012 "num_base_bdevs_operational": 1, 00:20:16.012 "base_bdevs_list": [ 00:20:16.012 { 00:20:16.012 "name": null, 00:20:16.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.012 "is_configured": false, 00:20:16.012 "data_offset": 0, 00:20:16.012 "data_size": 63488 00:20:16.012 }, 00:20:16.012 { 00:20:16.012 "name": "BaseBdev2", 00:20:16.012 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:16.012 "is_configured": true, 00:20:16.012 "data_offset": 2048, 00:20:16.012 "data_size": 63488 00:20:16.012 } 00:20:16.012 ] 00:20:16.012 }' 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.012 [2024-12-06 13:14:22.491946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:16.012 [2024-12-06 13:14:22.492037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.012 [2024-12-06 13:14:22.492073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:16.012 [2024-12-06 13:14:22.492094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.012 [2024-12-06 13:14:22.492766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.012 [2024-12-06 13:14:22.492808] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:16.012 [2024-12-06 13:14:22.492921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:16.012 [2024-12-06 13:14:22.492954] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:16.012 [2024-12-06 13:14:22.492967] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:16.012 [2024-12-06 13:14:22.492986] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:16.012 BaseBdev1 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.012 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.388 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.388 "name": "raid_bdev1", 00:20:17.388 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:17.388 "strip_size_kb": 0, 00:20:17.388 "state": "online", 00:20:17.388 "raid_level": "raid1", 00:20:17.388 "superblock": true, 00:20:17.388 "num_base_bdevs": 2, 00:20:17.389 "num_base_bdevs_discovered": 1, 00:20:17.389 "num_base_bdevs_operational": 1, 00:20:17.389 "base_bdevs_list": [ 00:20:17.389 { 00:20:17.389 "name": null, 00:20:17.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.389 "is_configured": false, 00:20:17.389 "data_offset": 0, 00:20:17.389 "data_size": 63488 00:20:17.389 }, 00:20:17.389 { 00:20:17.389 "name": "BaseBdev2", 00:20:17.389 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:17.389 "is_configured": true, 00:20:17.389 "data_offset": 2048, 00:20:17.389 "data_size": 63488 00:20:17.389 } 00:20:17.389 ] 00:20:17.389 }' 00:20:17.389 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.389 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.648 "name": "raid_bdev1", 00:20:17.648 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:17.648 "strip_size_kb": 0, 00:20:17.648 "state": "online", 00:20:17.648 "raid_level": "raid1", 00:20:17.648 "superblock": true, 00:20:17.648 "num_base_bdevs": 2, 00:20:17.648 "num_base_bdevs_discovered": 1, 00:20:17.648 "num_base_bdevs_operational": 1, 00:20:17.648 "base_bdevs_list": [ 00:20:17.648 { 00:20:17.648 "name": null, 00:20:17.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.648 "is_configured": false, 00:20:17.648 "data_offset": 0, 00:20:17.648 "data_size": 63488 00:20:17.648 }, 00:20:17.648 { 00:20:17.648 "name": "BaseBdev2", 00:20:17.648 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:17.648 "is_configured": true, 00:20:17.648 "data_offset": 2048, 00:20:17.648 "data_size": 63488 00:20:17.648 } 00:20:17.648 ] 00:20:17.648 }' 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:17.648 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.906 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.906 [2024-12-06 13:14:24.205002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.906 [2024-12-06 13:14:24.205250] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:17.906 [2024-12-06 13:14:24.205272] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:17.906 request: 00:20:17.906 { 00:20:17.906 "base_bdev": "BaseBdev1", 00:20:17.906 "raid_bdev": "raid_bdev1", 00:20:17.906 "method": "bdev_raid_add_base_bdev", 00:20:17.906 "req_id": 1 00:20:17.907 } 00:20:17.907 Got JSON-RPC error response 00:20:17.907 response: 00:20:17.907 { 00:20:17.907 "code": -22, 00:20:17.907 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:17.907 } 00:20:17.907 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:20:17.907 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:20:17.907 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:17.907 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:17.907 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:17.907 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.842 "name": "raid_bdev1", 00:20:18.842 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:18.842 "strip_size_kb": 0, 00:20:18.842 "state": "online", 00:20:18.842 "raid_level": "raid1", 00:20:18.842 "superblock": true, 00:20:18.842 "num_base_bdevs": 2, 00:20:18.842 "num_base_bdevs_discovered": 1, 00:20:18.842 "num_base_bdevs_operational": 1, 00:20:18.842 "base_bdevs_list": [ 00:20:18.842 { 00:20:18.842 "name": null, 00:20:18.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.842 "is_configured": false, 00:20:18.842 "data_offset": 0, 00:20:18.842 "data_size": 63488 00:20:18.842 }, 00:20:18.842 { 00:20:18.842 "name": "BaseBdev2", 00:20:18.842 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:18.842 "is_configured": true, 00:20:18.842 "data_offset": 2048, 00:20:18.842 "data_size": 63488 00:20:18.842 } 00:20:18.842 ] 00:20:18.842 }' 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.842 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.408 13:14:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.408 "name": "raid_bdev1", 00:20:19.408 "uuid": "c1d5fca7-209e-4009-b850-501aaf36d370", 00:20:19.408 "strip_size_kb": 0, 00:20:19.408 "state": "online", 00:20:19.408 "raid_level": "raid1", 00:20:19.408 "superblock": true, 00:20:19.408 "num_base_bdevs": 2, 00:20:19.408 "num_base_bdevs_discovered": 1, 00:20:19.408 "num_base_bdevs_operational": 1, 00:20:19.408 "base_bdevs_list": [ 00:20:19.408 { 00:20:19.408 "name": null, 00:20:19.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.408 "is_configured": false, 00:20:19.408 "data_offset": 0, 00:20:19.408 "data_size": 63488 00:20:19.408 }, 00:20:19.408 { 00:20:19.408 "name": "BaseBdev2", 00:20:19.408 "uuid": "619b8e94-3e2a-550c-b1c2-3ddd713e9bec", 00:20:19.408 "is_configured": true, 00:20:19.408 "data_offset": 2048, 00:20:19.408 "data_size": 63488 00:20:19.408 } 00:20:19.408 ] 00:20:19.408 }' 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.408 13:14:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77398 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77398 ']' 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77398 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.408 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77398 00:20:19.667 killing process with pid 77398 00:20:19.667 Received shutdown signal, test time was about 18.479542 seconds 00:20:19.667 00:20:19.667 Latency(us) 00:20:19.667 [2024-12-06T13:14:26.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.667 [2024-12-06T13:14:26.196Z] =================================================================================================================== 00:20:19.667 [2024-12-06T13:14:26.196Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.667 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.667 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.667 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77398' 00:20:19.667 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77398 00:20:19.667 [2024-12-06 13:14:25.956121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.667 13:14:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77398 00:20:19.667 [2024-12-06 13:14:25.956301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.667 [2024-12-06 13:14:25.956384] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.667 [2024-12-06 13:14:25.956401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:19.667 [2024-12-06 13:14:26.179745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:21.044 00:20:21.044 real 0m22.050s 00:20:21.044 user 0m29.825s 00:20:21.044 sys 0m2.279s 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.044 ************************************ 00:20:21.044 END TEST raid_rebuild_test_sb_io 00:20:21.044 ************************************ 00:20:21.044 13:14:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:20:21.044 13:14:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:20:21.044 13:14:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:21.044 13:14:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.044 13:14:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.044 ************************************ 00:20:21.044 START TEST raid_rebuild_test 00:20:21.044 ************************************ 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:21.044 13:14:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78100 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78100 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78100 ']' 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.044 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.045 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.304 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:21.304 Zero copy mechanism will not be used. 
00:20:21.304 [2024-12-06 13:14:27.622116] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:20:21.304 [2024-12-06 13:14:27.622303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78100 ] 00:20:21.304 [2024-12-06 13:14:27.807230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.565 [2024-12-06 13:14:27.970174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.826 [2024-12-06 13:14:28.179785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.826 [2024-12-06 13:14:28.179885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.393 BaseBdev1_malloc 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.393 
[2024-12-06 13:14:28.690572] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:22.393 [2024-12-06 13:14:28.690676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.393 [2024-12-06 13:14:28.690709] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:22.393 [2024-12-06 13:14:28.690728] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.393 [2024-12-06 13:14:28.693617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.393 [2024-12-06 13:14:28.693675] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:22.393 BaseBdev1 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.393 BaseBdev2_malloc 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.393 [2024-12-06 13:14:28.748195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:22.393 [2024-12-06 13:14:28.748303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:22.393 [2024-12-06 13:14:28.748333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:22.393 [2024-12-06 13:14:28.748351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.393 [2024-12-06 13:14:28.751277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.393 [2024-12-06 13:14:28.751334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:22.393 BaseBdev2 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.393 BaseBdev3_malloc 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.393 [2024-12-06 13:14:28.813016] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:22.393 [2024-12-06 13:14:28.813099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.393 [2024-12-06 13:14:28.813133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:22.393 [2024-12-06 13:14:28.813152] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.393 [2024-12-06 13:14:28.815929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.393 [2024-12-06 13:14:28.815977] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:22.393 BaseBdev3 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.393 BaseBdev4_malloc 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.393 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.394 [2024-12-06 13:14:28.865316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:22.394 [2024-12-06 13:14:28.865406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.394 [2024-12-06 13:14:28.865438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:22.394 [2024-12-06 13:14:28.865482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.394 [2024-12-06 13:14:28.868256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.394 [2024-12-06 13:14:28.868308] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:22.394 BaseBdev4 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.394 spare_malloc 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.394 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 spare_delay 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 [2024-12-06 13:14:28.929651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:22.653 [2024-12-06 13:14:28.929739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.653 [2024-12-06 13:14:28.929767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:22.653 [2024-12-06 13:14:28.929786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.653 [2024-12-06 
13:14:28.932598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.653 [2024-12-06 13:14:28.932652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:22.653 spare 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 [2024-12-06 13:14:28.941704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.653 [2024-12-06 13:14:28.944201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:22.653 [2024-12-06 13:14:28.944299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:22.653 [2024-12-06 13:14:28.944386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:22.653 [2024-12-06 13:14:28.944709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:22.653 [2024-12-06 13:14:28.944745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:22.653 [2024-12-06 13:14:28.945096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:22.653 [2024-12-06 13:14:28.945346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:22.653 [2024-12-06 13:14:28.945376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:22.653 [2024-12-06 13:14:28.945642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.653 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.653 "name": "raid_bdev1", 00:20:22.653 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:22.653 "strip_size_kb": 0, 00:20:22.653 "state": "online", 00:20:22.653 "raid_level": 
"raid1", 00:20:22.653 "superblock": false, 00:20:22.653 "num_base_bdevs": 4, 00:20:22.653 "num_base_bdevs_discovered": 4, 00:20:22.653 "num_base_bdevs_operational": 4, 00:20:22.653 "base_bdevs_list": [ 00:20:22.653 { 00:20:22.653 "name": "BaseBdev1", 00:20:22.653 "uuid": "694c97d0-548a-56b4-ad3a-e0c894574701", 00:20:22.653 "is_configured": true, 00:20:22.653 "data_offset": 0, 00:20:22.653 "data_size": 65536 00:20:22.653 }, 00:20:22.653 { 00:20:22.653 "name": "BaseBdev2", 00:20:22.653 "uuid": "1eaef453-0f38-5e53-9a73-fdba257776dd", 00:20:22.653 "is_configured": true, 00:20:22.653 "data_offset": 0, 00:20:22.653 "data_size": 65536 00:20:22.653 }, 00:20:22.653 { 00:20:22.653 "name": "BaseBdev3", 00:20:22.653 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:22.653 "is_configured": true, 00:20:22.653 "data_offset": 0, 00:20:22.653 "data_size": 65536 00:20:22.653 }, 00:20:22.653 { 00:20:22.653 "name": "BaseBdev4", 00:20:22.653 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:22.653 "is_configured": true, 00:20:22.653 "data_offset": 0, 00:20:22.653 "data_size": 65536 00:20:22.653 } 00:20:22.653 ] 00:20:22.653 }' 00:20:22.653 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.653 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:23.220 [2024-12-06 13:14:29.462312] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.220 13:14:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.220 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:23.221 13:14:29 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:23.483 [2024-12-06 13:14:29.802040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:23.483 /dev/nbd0 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.483 1+0 records in 00:20:23.483 1+0 records out 00:20:23.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342397 s, 12.0 MB/s 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:23.483 13:14:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:33.482 65536+0 records in 00:20:33.482 65536+0 records out 00:20:33.482 33554432 bytes (34 MB, 32 MiB) copied, 8.6243 s, 3.9 MB/s 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:33.482 [2024-12-06 13:14:38.766151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:33.482 
13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.482 [2024-12-06 13:14:38.794250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.482 13:14:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.482 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.482 "name": "raid_bdev1", 00:20:33.482 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:33.482 "strip_size_kb": 0, 00:20:33.482 "state": "online", 00:20:33.482 "raid_level": "raid1", 00:20:33.482 "superblock": false, 00:20:33.482 "num_base_bdevs": 4, 00:20:33.482 "num_base_bdevs_discovered": 3, 00:20:33.482 "num_base_bdevs_operational": 3, 00:20:33.482 "base_bdevs_list": [ 00:20:33.482 { 00:20:33.482 "name": null, 00:20:33.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.482 "is_configured": false, 00:20:33.482 "data_offset": 0, 00:20:33.483 "data_size": 65536 00:20:33.483 }, 00:20:33.483 { 00:20:33.483 "name": "BaseBdev2", 00:20:33.483 "uuid": "1eaef453-0f38-5e53-9a73-fdba257776dd", 00:20:33.483 "is_configured": true, 00:20:33.483 "data_offset": 0, 00:20:33.483 "data_size": 65536 00:20:33.483 }, 00:20:33.483 { 00:20:33.483 "name": "BaseBdev3", 00:20:33.483 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:33.483 "is_configured": true, 00:20:33.483 "data_offset": 0, 00:20:33.483 "data_size": 65536 00:20:33.483 }, 00:20:33.483 { 00:20:33.483 "name": "BaseBdev4", 00:20:33.483 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:33.483 
"is_configured": true, 00:20:33.483 "data_offset": 0, 00:20:33.483 "data_size": 65536 00:20:33.483 } 00:20:33.483 ] 00:20:33.483 }' 00:20:33.483 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.483 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.483 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:33.483 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.483 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.483 [2024-12-06 13:14:39.258367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.483 [2024-12-06 13:14:39.272564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:20:33.483 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.483 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:33.483 [2024-12-06 13:14:39.275053] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.052 
13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.052 "name": "raid_bdev1", 00:20:34.052 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:34.052 "strip_size_kb": 0, 00:20:34.052 "state": "online", 00:20:34.052 "raid_level": "raid1", 00:20:34.052 "superblock": false, 00:20:34.052 "num_base_bdevs": 4, 00:20:34.052 "num_base_bdevs_discovered": 4, 00:20:34.052 "num_base_bdevs_operational": 4, 00:20:34.052 "process": { 00:20:34.052 "type": "rebuild", 00:20:34.052 "target": "spare", 00:20:34.052 "progress": { 00:20:34.052 "blocks": 20480, 00:20:34.052 "percent": 31 00:20:34.052 } 00:20:34.052 }, 00:20:34.052 "base_bdevs_list": [ 00:20:34.052 { 00:20:34.052 "name": "spare", 00:20:34.052 "uuid": "35df4122-31a4-5afc-8fc7-671fd7742b7b", 00:20:34.052 "is_configured": true, 00:20:34.052 "data_offset": 0, 00:20:34.052 "data_size": 65536 00:20:34.052 }, 00:20:34.052 { 00:20:34.052 "name": "BaseBdev2", 00:20:34.052 "uuid": "1eaef453-0f38-5e53-9a73-fdba257776dd", 00:20:34.052 "is_configured": true, 00:20:34.052 "data_offset": 0, 00:20:34.052 "data_size": 65536 00:20:34.052 }, 00:20:34.052 { 00:20:34.052 "name": "BaseBdev3", 00:20:34.052 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:34.052 "is_configured": true, 00:20:34.052 "data_offset": 0, 00:20:34.052 "data_size": 65536 00:20:34.052 }, 00:20:34.052 { 00:20:34.052 "name": "BaseBdev4", 00:20:34.052 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:34.052 "is_configured": true, 00:20:34.052 "data_offset": 0, 00:20:34.052 "data_size": 65536 00:20:34.052 } 00:20:34.052 ] 00:20:34.052 }' 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.052 [2024-12-06 13:14:40.452246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:34.052 [2024-12-06 13:14:40.484236] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:34.052 [2024-12-06 13:14:40.484327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.052 [2024-12-06 13:14:40.484354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:34.052 [2024-12-06 13:14:40.484370] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:34.052 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.053 13:14:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.053 "name": "raid_bdev1", 00:20:34.053 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:34.053 "strip_size_kb": 0, 00:20:34.053 "state": "online", 00:20:34.053 "raid_level": "raid1", 00:20:34.053 "superblock": false, 00:20:34.053 "num_base_bdevs": 4, 00:20:34.053 "num_base_bdevs_discovered": 3, 00:20:34.053 "num_base_bdevs_operational": 3, 00:20:34.053 "base_bdevs_list": [ 00:20:34.053 { 00:20:34.053 "name": null, 00:20:34.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.053 "is_configured": false, 00:20:34.053 "data_offset": 0, 00:20:34.053 "data_size": 65536 00:20:34.053 }, 00:20:34.053 { 00:20:34.053 "name": "BaseBdev2", 00:20:34.053 "uuid": "1eaef453-0f38-5e53-9a73-fdba257776dd", 00:20:34.053 "is_configured": true, 00:20:34.053 "data_offset": 0, 00:20:34.053 "data_size": 65536 00:20:34.053 }, 00:20:34.053 { 00:20:34.053 "name": 
"BaseBdev3", 00:20:34.053 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:34.053 "is_configured": true, 00:20:34.053 "data_offset": 0, 00:20:34.053 "data_size": 65536 00:20:34.053 }, 00:20:34.053 { 00:20:34.053 "name": "BaseBdev4", 00:20:34.053 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:34.053 "is_configured": true, 00:20:34.053 "data_offset": 0, 00:20:34.053 "data_size": 65536 00:20:34.053 } 00:20:34.053 ] 00:20:34.053 }' 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.053 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.620 "name": "raid_bdev1", 00:20:34.620 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:34.620 "strip_size_kb": 0, 00:20:34.620 "state": "online", 00:20:34.620 "raid_level": 
"raid1", 00:20:34.620 "superblock": false, 00:20:34.620 "num_base_bdevs": 4, 00:20:34.620 "num_base_bdevs_discovered": 3, 00:20:34.620 "num_base_bdevs_operational": 3, 00:20:34.620 "base_bdevs_list": [ 00:20:34.620 { 00:20:34.620 "name": null, 00:20:34.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.620 "is_configured": false, 00:20:34.620 "data_offset": 0, 00:20:34.620 "data_size": 65536 00:20:34.620 }, 00:20:34.620 { 00:20:34.620 "name": "BaseBdev2", 00:20:34.620 "uuid": "1eaef453-0f38-5e53-9a73-fdba257776dd", 00:20:34.620 "is_configured": true, 00:20:34.620 "data_offset": 0, 00:20:34.620 "data_size": 65536 00:20:34.620 }, 00:20:34.620 { 00:20:34.620 "name": "BaseBdev3", 00:20:34.620 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:34.620 "is_configured": true, 00:20:34.620 "data_offset": 0, 00:20:34.620 "data_size": 65536 00:20:34.620 }, 00:20:34.620 { 00:20:34.620 "name": "BaseBdev4", 00:20:34.620 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:34.620 "is_configured": true, 00:20:34.620 "data_offset": 0, 00:20:34.620 "data_size": 65536 00:20:34.620 } 00:20:34.620 ] 00:20:34.620 }' 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.620 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.879 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.879 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:34.879 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.879 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.879 [2024-12-06 13:14:41.176363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:20:34.879 [2024-12-06 13:14:41.189669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:20:34.879 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.879 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:34.879 [2024-12-06 13:14:41.192249] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.816 "name": "raid_bdev1", 00:20:35.816 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:35.816 "strip_size_kb": 0, 00:20:35.816 "state": "online", 00:20:35.816 "raid_level": "raid1", 00:20:35.816 "superblock": false, 00:20:35.816 "num_base_bdevs": 4, 00:20:35.816 "num_base_bdevs_discovered": 4, 00:20:35.816 "num_base_bdevs_operational": 4, 
00:20:35.816 "process": { 00:20:35.816 "type": "rebuild", 00:20:35.816 "target": "spare", 00:20:35.816 "progress": { 00:20:35.816 "blocks": 20480, 00:20:35.816 "percent": 31 00:20:35.816 } 00:20:35.816 }, 00:20:35.816 "base_bdevs_list": [ 00:20:35.816 { 00:20:35.816 "name": "spare", 00:20:35.816 "uuid": "35df4122-31a4-5afc-8fc7-671fd7742b7b", 00:20:35.816 "is_configured": true, 00:20:35.816 "data_offset": 0, 00:20:35.816 "data_size": 65536 00:20:35.816 }, 00:20:35.816 { 00:20:35.816 "name": "BaseBdev2", 00:20:35.816 "uuid": "1eaef453-0f38-5e53-9a73-fdba257776dd", 00:20:35.816 "is_configured": true, 00:20:35.816 "data_offset": 0, 00:20:35.816 "data_size": 65536 00:20:35.816 }, 00:20:35.816 { 00:20:35.816 "name": "BaseBdev3", 00:20:35.816 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:35.816 "is_configured": true, 00:20:35.816 "data_offset": 0, 00:20:35.816 "data_size": 65536 00:20:35.816 }, 00:20:35.816 { 00:20:35.816 "name": "BaseBdev4", 00:20:35.816 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:35.816 "is_configured": true, 00:20:35.816 "data_offset": 0, 00:20:35.816 "data_size": 65536 00:20:35.816 } 00:20:35.816 ] 00:20:35.816 }' 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.816 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.075 [2024-12-06 13:14:42.365662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:36.075 [2024-12-06 13:14:42.401442] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.075 "name": "raid_bdev1", 00:20:36.075 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:36.075 "strip_size_kb": 0, 00:20:36.075 "state": "online", 00:20:36.075 "raid_level": "raid1", 00:20:36.075 "superblock": false, 00:20:36.075 "num_base_bdevs": 4, 00:20:36.075 "num_base_bdevs_discovered": 3, 00:20:36.075 "num_base_bdevs_operational": 3, 00:20:36.075 "process": { 00:20:36.075 "type": "rebuild", 00:20:36.075 "target": "spare", 00:20:36.075 "progress": { 00:20:36.075 "blocks": 24576, 00:20:36.075 "percent": 37 00:20:36.075 } 00:20:36.075 }, 00:20:36.075 "base_bdevs_list": [ 00:20:36.075 { 00:20:36.075 "name": "spare", 00:20:36.075 "uuid": "35df4122-31a4-5afc-8fc7-671fd7742b7b", 00:20:36.075 "is_configured": true, 00:20:36.075 "data_offset": 0, 00:20:36.075 "data_size": 65536 00:20:36.075 }, 00:20:36.075 { 00:20:36.075 "name": null, 00:20:36.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.075 "is_configured": false, 00:20:36.075 "data_offset": 0, 00:20:36.075 "data_size": 65536 00:20:36.075 }, 00:20:36.075 { 00:20:36.075 "name": "BaseBdev3", 00:20:36.075 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:36.075 "is_configured": true, 00:20:36.075 "data_offset": 0, 00:20:36.075 "data_size": 65536 00:20:36.075 }, 00:20:36.075 { 00:20:36.075 "name": "BaseBdev4", 00:20:36.075 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:36.075 "is_configured": true, 00:20:36.075 "data_offset": 0, 00:20:36.075 "data_size": 65536 00:20:36.075 } 00:20:36.075 ] 00:20:36.075 }' 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.075 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.075 13:14:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.076 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.334 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.334 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.334 "name": "raid_bdev1", 00:20:36.334 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:36.334 "strip_size_kb": 0, 00:20:36.334 "state": "online", 00:20:36.334 "raid_level": "raid1", 00:20:36.334 "superblock": false, 00:20:36.334 "num_base_bdevs": 4, 00:20:36.334 "num_base_bdevs_discovered": 3, 00:20:36.334 "num_base_bdevs_operational": 3, 00:20:36.334 "process": { 00:20:36.334 "type": "rebuild", 00:20:36.334 "target": "spare", 00:20:36.334 "progress": { 00:20:36.334 "blocks": 26624, 00:20:36.334 "percent": 40 
00:20:36.334 } 00:20:36.334 }, 00:20:36.334 "base_bdevs_list": [ 00:20:36.334 { 00:20:36.334 "name": "spare", 00:20:36.334 "uuid": "35df4122-31a4-5afc-8fc7-671fd7742b7b", 00:20:36.334 "is_configured": true, 00:20:36.334 "data_offset": 0, 00:20:36.334 "data_size": 65536 00:20:36.334 }, 00:20:36.334 { 00:20:36.334 "name": null, 00:20:36.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.334 "is_configured": false, 00:20:36.334 "data_offset": 0, 00:20:36.334 "data_size": 65536 00:20:36.334 }, 00:20:36.334 { 00:20:36.334 "name": "BaseBdev3", 00:20:36.334 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:36.334 "is_configured": true, 00:20:36.334 "data_offset": 0, 00:20:36.334 "data_size": 65536 00:20:36.334 }, 00:20:36.334 { 00:20:36.334 "name": "BaseBdev4", 00:20:36.334 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:36.334 "is_configured": true, 00:20:36.334 "data_offset": 0, 00:20:36.334 "data_size": 65536 00:20:36.334 } 00:20:36.334 ] 00:20:36.334 }' 00:20:36.334 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.334 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.334 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.334 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.334 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.268 13:14:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.268 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.526 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.526 "name": "raid_bdev1", 00:20:37.526 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:37.526 "strip_size_kb": 0, 00:20:37.526 "state": "online", 00:20:37.526 "raid_level": "raid1", 00:20:37.526 "superblock": false, 00:20:37.526 "num_base_bdevs": 4, 00:20:37.526 "num_base_bdevs_discovered": 3, 00:20:37.526 "num_base_bdevs_operational": 3, 00:20:37.526 "process": { 00:20:37.526 "type": "rebuild", 00:20:37.526 "target": "spare", 00:20:37.526 "progress": { 00:20:37.526 "blocks": 51200, 00:20:37.526 "percent": 78 00:20:37.526 } 00:20:37.526 }, 00:20:37.526 "base_bdevs_list": [ 00:20:37.526 { 00:20:37.526 "name": "spare", 00:20:37.526 "uuid": "35df4122-31a4-5afc-8fc7-671fd7742b7b", 00:20:37.526 "is_configured": true, 00:20:37.526 "data_offset": 0, 00:20:37.526 "data_size": 65536 00:20:37.526 }, 00:20:37.526 { 00:20:37.526 "name": null, 00:20:37.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.526 "is_configured": false, 00:20:37.526 "data_offset": 0, 00:20:37.526 "data_size": 65536 00:20:37.526 }, 00:20:37.526 { 00:20:37.526 "name": "BaseBdev3", 00:20:37.526 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:37.526 "is_configured": true, 
00:20:37.526 "data_offset": 0, 00:20:37.526 "data_size": 65536 00:20:37.526 }, 00:20:37.526 { 00:20:37.526 "name": "BaseBdev4", 00:20:37.526 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:37.526 "is_configured": true, 00:20:37.526 "data_offset": 0, 00:20:37.526 "data_size": 65536 00:20:37.526 } 00:20:37.526 ] 00:20:37.526 }' 00:20:37.526 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.526 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.526 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.526 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.526 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:38.093 [2024-12-06 13:14:44.417299] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:38.093 [2024-12-06 13:14:44.417413] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:38.093 [2024-12-06 13:14:44.417495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.659 "name": "raid_bdev1", 00:20:38.659 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:38.659 "strip_size_kb": 0, 00:20:38.659 "state": "online", 00:20:38.659 "raid_level": "raid1", 00:20:38.659 "superblock": false, 00:20:38.659 "num_base_bdevs": 4, 00:20:38.659 "num_base_bdevs_discovered": 3, 00:20:38.659 "num_base_bdevs_operational": 3, 00:20:38.659 "base_bdevs_list": [ 00:20:38.659 { 00:20:38.659 "name": "spare", 00:20:38.659 "uuid": "35df4122-31a4-5afc-8fc7-671fd7742b7b", 00:20:38.659 "is_configured": true, 00:20:38.659 "data_offset": 0, 00:20:38.659 "data_size": 65536 00:20:38.659 }, 00:20:38.659 { 00:20:38.659 "name": null, 00:20:38.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.659 "is_configured": false, 00:20:38.659 "data_offset": 0, 00:20:38.659 "data_size": 65536 00:20:38.659 }, 00:20:38.659 { 00:20:38.659 "name": "BaseBdev3", 00:20:38.659 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:38.659 "is_configured": true, 00:20:38.659 "data_offset": 0, 00:20:38.659 "data_size": 65536 00:20:38.659 }, 00:20:38.659 { 00:20:38.659 "name": "BaseBdev4", 00:20:38.659 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:38.659 "is_configured": true, 00:20:38.659 "data_offset": 0, 00:20:38.659 "data_size": 65536 00:20:38.659 } 00:20:38.659 ] 00:20:38.659 }' 00:20:38.659 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.659 13:14:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.659 "name": "raid_bdev1", 00:20:38.659 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:38.659 "strip_size_kb": 0, 00:20:38.659 "state": "online", 00:20:38.659 "raid_level": "raid1", 00:20:38.659 "superblock": false, 00:20:38.659 "num_base_bdevs": 4, 00:20:38.659 "num_base_bdevs_discovered": 3, 00:20:38.659 "num_base_bdevs_operational": 3, 00:20:38.659 "base_bdevs_list": [ 00:20:38.659 { 00:20:38.659 "name": "spare", 
00:20:38.659 "uuid": "35df4122-31a4-5afc-8fc7-671fd7742b7b", 00:20:38.659 "is_configured": true, 00:20:38.659 "data_offset": 0, 00:20:38.659 "data_size": 65536 00:20:38.659 }, 00:20:38.659 { 00:20:38.659 "name": null, 00:20:38.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.659 "is_configured": false, 00:20:38.659 "data_offset": 0, 00:20:38.659 "data_size": 65536 00:20:38.659 }, 00:20:38.659 { 00:20:38.659 "name": "BaseBdev3", 00:20:38.659 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:38.659 "is_configured": true, 00:20:38.659 "data_offset": 0, 00:20:38.659 "data_size": 65536 00:20:38.659 }, 00:20:38.659 { 00:20:38.659 "name": "BaseBdev4", 00:20:38.659 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:38.659 "is_configured": true, 00:20:38.659 "data_offset": 0, 00:20:38.659 "data_size": 65536 00:20:38.659 } 00:20:38.659 ] 00:20:38.659 }' 00:20:38.659 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.917 13:14:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.917 "name": "raid_bdev1", 00:20:38.917 "uuid": "8de0c251-3d56-480c-be4b-ae10dc26567c", 00:20:38.917 "strip_size_kb": 0, 00:20:38.917 "state": "online", 00:20:38.917 "raid_level": "raid1", 00:20:38.917 "superblock": false, 00:20:38.917 "num_base_bdevs": 4, 00:20:38.917 "num_base_bdevs_discovered": 3, 00:20:38.917 "num_base_bdevs_operational": 3, 00:20:38.917 "base_bdevs_list": [ 00:20:38.917 { 00:20:38.917 "name": "spare", 00:20:38.917 "uuid": "35df4122-31a4-5afc-8fc7-671fd7742b7b", 00:20:38.917 "is_configured": true, 00:20:38.917 "data_offset": 0, 00:20:38.917 "data_size": 65536 00:20:38.917 }, 00:20:38.917 { 00:20:38.917 "name": null, 00:20:38.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.917 "is_configured": false, 00:20:38.917 "data_offset": 0, 00:20:38.917 "data_size": 65536 00:20:38.917 }, 00:20:38.917 { 00:20:38.917 "name": "BaseBdev3", 00:20:38.917 "uuid": "cbeddb81-1072-5677-963c-1dc8926ef308", 00:20:38.917 "is_configured": true, 
00:20:38.917 "data_offset": 0, 00:20:38.917 "data_size": 65536 00:20:38.917 }, 00:20:38.917 { 00:20:38.917 "name": "BaseBdev4", 00:20:38.917 "uuid": "32705569-505f-552a-b948-5ac24e571317", 00:20:38.917 "is_configured": true, 00:20:38.917 "data_offset": 0, 00:20:38.917 "data_size": 65536 00:20:38.917 } 00:20:38.917 ] 00:20:38.917 }' 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.917 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.481 [2024-12-06 13:14:45.809192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.481 [2024-12-06 13:14:45.809238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.481 [2024-12-06 13:14:45.809352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.481 [2024-12-06 13:14:45.809479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.481 [2024-12-06 13:14:45.809498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:39.481 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:39.739 /dev/nbd0 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:39.739 13:14:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:39.739 1+0 records in 00:20:39.739 1+0 records out 00:20:39.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366743 s, 11.2 MB/s 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:39.739 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:39.996 /dev/nbd1 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:40.261 
13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:40.261 1+0 records in 00:20:40.261 1+0 records out 00:20:40.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479464 s, 8.5 MB/s 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:40.261 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.262 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:40.828 
13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78100 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78100 ']' 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78100 00:20:40.828 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:41.087 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.087 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78100 00:20:41.088 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.088 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.088 killing process with pid 78100 00:20:41.088 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78100' 00:20:41.088 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78100 00:20:41.088 
Received shutdown signal, test time was about 60.000000 seconds 00:20:41.088 00:20:41.088 Latency(us) 00:20:41.088 [2024-12-06T13:14:47.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.088 [2024-12-06T13:14:47.617Z] =================================================================================================================== 00:20:41.088 [2024-12-06T13:14:47.617Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.088 [2024-12-06 13:14:47.379380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.088 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78100 00:20:41.346 [2024-12-06 13:14:47.810523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:42.724 13:14:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:42.724 00:20:42.724 real 0m21.361s 00:20:42.724 user 0m23.737s 00:20:42.724 sys 0m3.577s 00:20:42.724 13:14:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.724 13:14:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.724 ************************************ 00:20:42.724 END TEST raid_rebuild_test 00:20:42.725 ************************************ 00:20:42.725 13:14:48 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:20:42.725 13:14:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:42.725 13:14:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.725 13:14:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.725 ************************************ 00:20:42.725 START TEST raid_rebuild_test_sb 00:20:42.725 ************************************ 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:20:42.725 13:14:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78586 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78586 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78586 ']' 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.725 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.725 [2024-12-06 13:14:49.033476] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:20:42.725 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:42.725 Zero copy mechanism will not be used. 00:20:42.725 [2024-12-06 13:14:49.033663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78586 ] 00:20:42.725 [2024-12-06 13:14:49.227297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.983 [2024-12-06 13:14:49.385716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.241 [2024-12-06 13:14:49.603535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.241 [2024-12-06 13:14:49.603577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.501 13:14:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.501 BaseBdev1_malloc 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.501 [2024-12-06 13:14:49.996501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:43.501 [2024-12-06 13:14:49.996582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.501 [2024-12-06 13:14:49.996616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:43.501 [2024-12-06 13:14:49.996637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.501 [2024-12-06 13:14:49.999429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.501 [2024-12-06 13:14:49.999508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:43.501 BaseBdev1 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.501 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:43.501 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:43.501 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.501 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.760 BaseBdev2_malloc 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.760 [2024-12-06 13:14:50.045161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:43.760 [2024-12-06 13:14:50.045243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.760 [2024-12-06 13:14:50.045274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:43.760 [2024-12-06 13:14:50.045293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.760 [2024-12-06 13:14:50.048223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.760 [2024-12-06 13:14:50.048276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:43.760 BaseBdev2 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.760 BaseBdev3_malloc 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:43.760 13:14:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.760 [2024-12-06 13:14:50.108549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:43.760 [2024-12-06 13:14:50.108625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.760 [2024-12-06 13:14:50.108659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:43.760 [2024-12-06 13:14:50.108678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.760 [2024-12-06 13:14:50.111478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.760 [2024-12-06 13:14:50.111527] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:43.760 BaseBdev3 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.760 BaseBdev4_malloc 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.760 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.761 
[2024-12-06 13:14:50.164921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:43.761 [2024-12-06 13:14:50.165004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.761 [2024-12-06 13:14:50.165037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:43.761 [2024-12-06 13:14:50.165056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.761 [2024-12-06 13:14:50.167930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.761 [2024-12-06 13:14:50.168000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:43.761 BaseBdev4 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.761 spare_malloc 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.761 spare_delay 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:43.761 13:14:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.761 [2024-12-06 13:14:50.225379] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:43.761 [2024-12-06 13:14:50.225467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.761 [2024-12-06 13:14:50.225501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:43.761 [2024-12-06 13:14:50.225520] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.761 [2024-12-06 13:14:50.228380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.761 [2024-12-06 13:14:50.228434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:43.761 spare 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.761 [2024-12-06 13:14:50.233460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.761 [2024-12-06 13:14:50.235939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.761 [2024-12-06 13:14:50.236034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:43.761 [2024-12-06 13:14:50.236120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:43.761 [2024-12-06 13:14:50.236388] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:43.761 [2024-12-06 13:14:50.236421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:43.761 [2024-12-06 13:14:50.236789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:43.761 [2024-12-06 13:14:50.237047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:43.761 [2024-12-06 13:14:50.237073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:43.761 [2024-12-06 13:14:50.237284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.761 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.020 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.020 "name": "raid_bdev1", 00:20:44.020 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:44.020 "strip_size_kb": 0, 00:20:44.020 "state": "online", 00:20:44.020 "raid_level": "raid1", 00:20:44.020 "superblock": true, 00:20:44.020 "num_base_bdevs": 4, 00:20:44.020 "num_base_bdevs_discovered": 4, 00:20:44.020 "num_base_bdevs_operational": 4, 00:20:44.020 "base_bdevs_list": [ 00:20:44.020 { 00:20:44.020 "name": "BaseBdev1", 00:20:44.020 "uuid": "f131e49c-037a-56b8-ad8b-4e9c937008a4", 00:20:44.020 "is_configured": true, 00:20:44.020 "data_offset": 2048, 00:20:44.020 "data_size": 63488 00:20:44.020 }, 00:20:44.020 { 00:20:44.020 "name": "BaseBdev2", 00:20:44.020 "uuid": "8402d000-a526-5c77-b773-68fc7eca018a", 00:20:44.020 "is_configured": true, 00:20:44.020 "data_offset": 2048, 00:20:44.020 "data_size": 63488 00:20:44.020 }, 00:20:44.020 { 00:20:44.020 "name": "BaseBdev3", 00:20:44.020 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:20:44.020 "is_configured": true, 00:20:44.020 "data_offset": 2048, 00:20:44.020 "data_size": 63488 00:20:44.020 }, 00:20:44.020 { 00:20:44.020 "name": "BaseBdev4", 00:20:44.020 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:44.020 "is_configured": true, 00:20:44.020 "data_offset": 2048, 00:20:44.020 "data_size": 63488 00:20:44.020 } 00:20:44.020 ] 00:20:44.020 }' 00:20:44.020 13:14:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.020 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.279 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:44.279 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:44.279 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.279 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.279 [2024-12-06 13:14:50.770089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.279 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.537 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:44.796 [2024-12-06 13:14:51.125771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:44.796 /dev/nbd0 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:44.796 
13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.796 1+0 records in 00:20:44.796 1+0 records out 00:20:44.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343719 s, 11.9 MB/s 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:44.796 13:14:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:54.769 63488+0 records in 00:20:54.769 63488+0 records out 00:20:54.769 32505856 bytes (33 MB, 31 MiB) copied, 8.78817 s, 3.7 MB/s 00:20:54.769 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:54.769 13:14:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:54.769 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:54.769 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.769 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:54.769 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.769 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:54.769 [2024-12-06 13:15:00.262725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.769 [2024-12-06 13:15:00.298833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.769 
13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.769 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.769 "name": "raid_bdev1", 00:20:54.769 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:54.769 "strip_size_kb": 0, 00:20:54.769 "state": 
"online", 00:20:54.769 "raid_level": "raid1", 00:20:54.769 "superblock": true, 00:20:54.769 "num_base_bdevs": 4, 00:20:54.769 "num_base_bdevs_discovered": 3, 00:20:54.769 "num_base_bdevs_operational": 3, 00:20:54.769 "base_bdevs_list": [ 00:20:54.769 { 00:20:54.769 "name": null, 00:20:54.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.769 "is_configured": false, 00:20:54.770 "data_offset": 0, 00:20:54.770 "data_size": 63488 00:20:54.770 }, 00:20:54.770 { 00:20:54.770 "name": "BaseBdev2", 00:20:54.770 "uuid": "8402d000-a526-5c77-b773-68fc7eca018a", 00:20:54.770 "is_configured": true, 00:20:54.770 "data_offset": 2048, 00:20:54.770 "data_size": 63488 00:20:54.770 }, 00:20:54.770 { 00:20:54.770 "name": "BaseBdev3", 00:20:54.770 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:20:54.770 "is_configured": true, 00:20:54.770 "data_offset": 2048, 00:20:54.770 "data_size": 63488 00:20:54.770 }, 00:20:54.770 { 00:20:54.770 "name": "BaseBdev4", 00:20:54.770 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:54.770 "is_configured": true, 00:20:54.770 "data_offset": 2048, 00:20:54.770 "data_size": 63488 00:20:54.770 } 00:20:54.770 ] 00:20:54.770 }' 00:20:54.770 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.770 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.770 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:54.770 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.770 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.770 [2024-12-06 13:15:00.794981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:54.770 [2024-12-06 13:15:00.809461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:20:54.770 13:15:00 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.770 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:54.770 [2024-12-06 13:15:00.812086] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.335 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.595 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.595 "name": "raid_bdev1", 00:20:55.595 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:55.595 "strip_size_kb": 0, 00:20:55.595 "state": "online", 00:20:55.595 "raid_level": "raid1", 00:20:55.595 "superblock": true, 00:20:55.595 "num_base_bdevs": 4, 00:20:55.595 "num_base_bdevs_discovered": 4, 00:20:55.595 "num_base_bdevs_operational": 4, 00:20:55.595 "process": { 00:20:55.595 "type": "rebuild", 00:20:55.595 "target": "spare", 00:20:55.595 "progress": { 00:20:55.595 "blocks": 20480, 
00:20:55.595 "percent": 32 00:20:55.595 } 00:20:55.595 }, 00:20:55.595 "base_bdevs_list": [ 00:20:55.595 { 00:20:55.595 "name": "spare", 00:20:55.595 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:20:55.595 "is_configured": true, 00:20:55.595 "data_offset": 2048, 00:20:55.595 "data_size": 63488 00:20:55.595 }, 00:20:55.595 { 00:20:55.595 "name": "BaseBdev2", 00:20:55.595 "uuid": "8402d000-a526-5c77-b773-68fc7eca018a", 00:20:55.595 "is_configured": true, 00:20:55.595 "data_offset": 2048, 00:20:55.595 "data_size": 63488 00:20:55.595 }, 00:20:55.595 { 00:20:55.595 "name": "BaseBdev3", 00:20:55.595 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:20:55.595 "is_configured": true, 00:20:55.595 "data_offset": 2048, 00:20:55.595 "data_size": 63488 00:20:55.595 }, 00:20:55.595 { 00:20:55.595 "name": "BaseBdev4", 00:20:55.595 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:55.595 "is_configured": true, 00:20:55.595 "data_offset": 2048, 00:20:55.595 "data_size": 63488 00:20:55.595 } 00:20:55.595 ] 00:20:55.595 }' 00:20:55.595 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.595 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.595 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.595 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.595 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:55.595 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.595 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.595 [2024-12-06 13:15:01.977241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.595 [2024-12-06 13:15:02.021231] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:55.596 [2024-12-06 13:15:02.021332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.596 [2024-12-06 13:15:02.021360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.596 [2024-12-06 13:15:02.021376] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.596 "name": "raid_bdev1", 00:20:55.596 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:55.596 "strip_size_kb": 0, 00:20:55.596 "state": "online", 00:20:55.596 "raid_level": "raid1", 00:20:55.596 "superblock": true, 00:20:55.596 "num_base_bdevs": 4, 00:20:55.596 "num_base_bdevs_discovered": 3, 00:20:55.596 "num_base_bdevs_operational": 3, 00:20:55.596 "base_bdevs_list": [ 00:20:55.596 { 00:20:55.596 "name": null, 00:20:55.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.596 "is_configured": false, 00:20:55.596 "data_offset": 0, 00:20:55.596 "data_size": 63488 00:20:55.596 }, 00:20:55.596 { 00:20:55.596 "name": "BaseBdev2", 00:20:55.596 "uuid": "8402d000-a526-5c77-b773-68fc7eca018a", 00:20:55.596 "is_configured": true, 00:20:55.596 "data_offset": 2048, 00:20:55.596 "data_size": 63488 00:20:55.596 }, 00:20:55.596 { 00:20:55.596 "name": "BaseBdev3", 00:20:55.596 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:20:55.596 "is_configured": true, 00:20:55.596 "data_offset": 2048, 00:20:55.596 "data_size": 63488 00:20:55.596 }, 00:20:55.596 { 00:20:55.596 "name": "BaseBdev4", 00:20:55.596 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:55.596 "is_configured": true, 00:20:55.596 "data_offset": 2048, 00:20:55.596 "data_size": 63488 00:20:55.596 } 00:20:55.596 ] 00:20:55.596 }' 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.596 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:56.168 
13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.168 "name": "raid_bdev1", 00:20:56.168 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:56.168 "strip_size_kb": 0, 00:20:56.168 "state": "online", 00:20:56.168 "raid_level": "raid1", 00:20:56.168 "superblock": true, 00:20:56.168 "num_base_bdevs": 4, 00:20:56.168 "num_base_bdevs_discovered": 3, 00:20:56.168 "num_base_bdevs_operational": 3, 00:20:56.168 "base_bdevs_list": [ 00:20:56.168 { 00:20:56.168 "name": null, 00:20:56.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.168 "is_configured": false, 00:20:56.168 "data_offset": 0, 00:20:56.168 "data_size": 63488 00:20:56.168 }, 00:20:56.168 { 00:20:56.168 "name": "BaseBdev2", 00:20:56.168 "uuid": "8402d000-a526-5c77-b773-68fc7eca018a", 00:20:56.168 "is_configured": true, 00:20:56.168 "data_offset": 2048, 00:20:56.168 "data_size": 63488 00:20:56.168 }, 00:20:56.168 { 00:20:56.168 "name": "BaseBdev3", 00:20:56.168 "uuid": 
"4620500d-129b-5989-936d-c485e57a6b1c", 00:20:56.168 "is_configured": true, 00:20:56.168 "data_offset": 2048, 00:20:56.168 "data_size": 63488 00:20:56.168 }, 00:20:56.168 { 00:20:56.168 "name": "BaseBdev4", 00:20:56.168 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:56.168 "is_configured": true, 00:20:56.168 "data_offset": 2048, 00:20:56.168 "data_size": 63488 00:20:56.168 } 00:20:56.168 ] 00:20:56.168 }' 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.168 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.168 [2024-12-06 13:15:02.693242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.427 [2024-12-06 13:15:02.706879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:20:56.427 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.427 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:56.427 [2024-12-06 13:15:02.709416] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.360 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.360 "name": "raid_bdev1", 00:20:57.360 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:57.360 "strip_size_kb": 0, 00:20:57.360 "state": "online", 00:20:57.360 "raid_level": "raid1", 00:20:57.360 "superblock": true, 00:20:57.360 "num_base_bdevs": 4, 00:20:57.360 "num_base_bdevs_discovered": 4, 00:20:57.360 "num_base_bdevs_operational": 4, 00:20:57.360 "process": { 00:20:57.360 "type": "rebuild", 00:20:57.360 "target": "spare", 00:20:57.360 "progress": { 00:20:57.360 "blocks": 20480, 00:20:57.360 "percent": 32 00:20:57.360 } 00:20:57.360 }, 00:20:57.360 "base_bdevs_list": [ 00:20:57.360 { 00:20:57.360 "name": "spare", 00:20:57.360 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:20:57.360 "is_configured": true, 00:20:57.360 "data_offset": 2048, 00:20:57.360 "data_size": 63488 00:20:57.361 }, 00:20:57.361 { 00:20:57.361 "name": "BaseBdev2", 00:20:57.361 "uuid": "8402d000-a526-5c77-b773-68fc7eca018a", 00:20:57.361 "is_configured": true, 00:20:57.361 "data_offset": 2048, 
00:20:57.361 "data_size": 63488 00:20:57.361 }, 00:20:57.361 { 00:20:57.361 "name": "BaseBdev3", 00:20:57.361 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:20:57.361 "is_configured": true, 00:20:57.361 "data_offset": 2048, 00:20:57.361 "data_size": 63488 00:20:57.361 }, 00:20:57.361 { 00:20:57.361 "name": "BaseBdev4", 00:20:57.361 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:57.361 "is_configured": true, 00:20:57.361 "data_offset": 2048, 00:20:57.361 "data_size": 63488 00:20:57.361 } 00:20:57.361 ] 00:20:57.361 }' 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:57.361 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.361 13:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.361 [2024-12-06 13:15:03.870963] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:57.620 [2024-12-06 13:15:04.018386] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.620 "name": "raid_bdev1", 00:20:57.620 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:57.620 "strip_size_kb": 0, 00:20:57.620 "state": "online", 00:20:57.620 "raid_level": "raid1", 00:20:57.620 "superblock": true, 00:20:57.620 "num_base_bdevs": 4, 
00:20:57.620 "num_base_bdevs_discovered": 3, 00:20:57.620 "num_base_bdevs_operational": 3, 00:20:57.620 "process": { 00:20:57.620 "type": "rebuild", 00:20:57.620 "target": "spare", 00:20:57.620 "progress": { 00:20:57.620 "blocks": 24576, 00:20:57.620 "percent": 38 00:20:57.620 } 00:20:57.620 }, 00:20:57.620 "base_bdevs_list": [ 00:20:57.620 { 00:20:57.620 "name": "spare", 00:20:57.620 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:20:57.620 "is_configured": true, 00:20:57.620 "data_offset": 2048, 00:20:57.620 "data_size": 63488 00:20:57.620 }, 00:20:57.620 { 00:20:57.620 "name": null, 00:20:57.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.620 "is_configured": false, 00:20:57.620 "data_offset": 0, 00:20:57.620 "data_size": 63488 00:20:57.620 }, 00:20:57.620 { 00:20:57.620 "name": "BaseBdev3", 00:20:57.620 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:20:57.620 "is_configured": true, 00:20:57.620 "data_offset": 2048, 00:20:57.620 "data_size": 63488 00:20:57.620 }, 00:20:57.620 { 00:20:57.620 "name": "BaseBdev4", 00:20:57.620 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:57.620 "is_configured": true, 00:20:57.620 "data_offset": 2048, 00:20:57.620 "data_size": 63488 00:20:57.620 } 00:20:57.620 ] 00:20:57.620 }' 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.620 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.879 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.879 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=516 00:20:57.879 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.879 13:15:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.879 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.879 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.880 "name": "raid_bdev1", 00:20:57.880 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:57.880 "strip_size_kb": 0, 00:20:57.880 "state": "online", 00:20:57.880 "raid_level": "raid1", 00:20:57.880 "superblock": true, 00:20:57.880 "num_base_bdevs": 4, 00:20:57.880 "num_base_bdevs_discovered": 3, 00:20:57.880 "num_base_bdevs_operational": 3, 00:20:57.880 "process": { 00:20:57.880 "type": "rebuild", 00:20:57.880 "target": "spare", 00:20:57.880 "progress": { 00:20:57.880 "blocks": 26624, 00:20:57.880 "percent": 41 00:20:57.880 } 00:20:57.880 }, 00:20:57.880 "base_bdevs_list": [ 00:20:57.880 { 00:20:57.880 "name": "spare", 00:20:57.880 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:20:57.880 "is_configured": true, 00:20:57.880 "data_offset": 2048, 00:20:57.880 "data_size": 63488 00:20:57.880 }, 00:20:57.880 { 
00:20:57.880 "name": null, 00:20:57.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.880 "is_configured": false, 00:20:57.880 "data_offset": 0, 00:20:57.880 "data_size": 63488 00:20:57.880 }, 00:20:57.880 { 00:20:57.880 "name": "BaseBdev3", 00:20:57.880 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:20:57.880 "is_configured": true, 00:20:57.880 "data_offset": 2048, 00:20:57.880 "data_size": 63488 00:20:57.880 }, 00:20:57.880 { 00:20:57.880 "name": "BaseBdev4", 00:20:57.880 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:57.880 "is_configured": true, 00:20:57.880 "data_offset": 2048, 00:20:57.880 "data_size": 63488 00:20:57.880 } 00:20:57.880 ] 00:20:57.880 }' 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.880 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.259 "name": "raid_bdev1", 00:20:59.259 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:20:59.259 "strip_size_kb": 0, 00:20:59.259 "state": "online", 00:20:59.259 "raid_level": "raid1", 00:20:59.259 "superblock": true, 00:20:59.259 "num_base_bdevs": 4, 00:20:59.259 "num_base_bdevs_discovered": 3, 00:20:59.259 "num_base_bdevs_operational": 3, 00:20:59.259 "process": { 00:20:59.259 "type": "rebuild", 00:20:59.259 "target": "spare", 00:20:59.259 "progress": { 00:20:59.259 "blocks": 51200, 00:20:59.259 "percent": 80 00:20:59.259 } 00:20:59.259 }, 00:20:59.259 "base_bdevs_list": [ 00:20:59.259 { 00:20:59.259 "name": "spare", 00:20:59.259 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:20:59.259 "is_configured": true, 00:20:59.259 "data_offset": 2048, 00:20:59.259 "data_size": 63488 00:20:59.259 }, 00:20:59.259 { 00:20:59.259 "name": null, 00:20:59.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.259 "is_configured": false, 00:20:59.259 "data_offset": 0, 00:20:59.259 "data_size": 63488 00:20:59.259 }, 00:20:59.259 { 00:20:59.259 "name": "BaseBdev3", 00:20:59.259 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:20:59.259 "is_configured": true, 00:20:59.259 "data_offset": 2048, 00:20:59.259 "data_size": 63488 00:20:59.259 }, 00:20:59.259 { 00:20:59.259 "name": "BaseBdev4", 00:20:59.259 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:20:59.259 "is_configured": true, 00:20:59.259 "data_offset": 
2048, 00:20:59.259 "data_size": 63488 00:20:59.259 } 00:20:59.259 ] 00:20:59.259 }' 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.259 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.517 [2024-12-06 13:15:05.932194] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:59.517 [2024-12-06 13:15:05.932292] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:59.517 [2024-12-06 13:15:05.932477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.084 "name": "raid_bdev1", 00:21:00.084 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:00.084 "strip_size_kb": 0, 00:21:00.084 "state": "online", 00:21:00.084 "raid_level": "raid1", 00:21:00.084 "superblock": true, 00:21:00.084 "num_base_bdevs": 4, 00:21:00.084 "num_base_bdevs_discovered": 3, 00:21:00.084 "num_base_bdevs_operational": 3, 00:21:00.084 "base_bdevs_list": [ 00:21:00.084 { 00:21:00.084 "name": "spare", 00:21:00.084 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:21:00.084 "is_configured": true, 00:21:00.084 "data_offset": 2048, 00:21:00.084 "data_size": 63488 00:21:00.084 }, 00:21:00.084 { 00:21:00.084 "name": null, 00:21:00.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.084 "is_configured": false, 00:21:00.084 "data_offset": 0, 00:21:00.084 "data_size": 63488 00:21:00.084 }, 00:21:00.084 { 00:21:00.084 "name": "BaseBdev3", 00:21:00.084 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:00.084 "is_configured": true, 00:21:00.084 "data_offset": 2048, 00:21:00.084 "data_size": 63488 00:21:00.084 }, 00:21:00.084 { 00:21:00.084 "name": "BaseBdev4", 00:21:00.084 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:00.084 "is_configured": true, 00:21:00.084 "data_offset": 2048, 00:21:00.084 "data_size": 63488 00:21:00.084 } 00:21:00.084 ] 00:21:00.084 }' 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:00.084 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.343 "name": "raid_bdev1", 00:21:00.343 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:00.343 "strip_size_kb": 0, 00:21:00.343 "state": "online", 00:21:00.343 "raid_level": "raid1", 00:21:00.343 "superblock": true, 00:21:00.343 "num_base_bdevs": 4, 00:21:00.343 "num_base_bdevs_discovered": 3, 00:21:00.343 "num_base_bdevs_operational": 3, 00:21:00.343 "base_bdevs_list": [ 00:21:00.343 { 00:21:00.343 "name": "spare", 00:21:00.343 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:21:00.343 "is_configured": true, 00:21:00.343 "data_offset": 2048, 
00:21:00.343 "data_size": 63488 00:21:00.343 }, 00:21:00.343 { 00:21:00.343 "name": null, 00:21:00.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.343 "is_configured": false, 00:21:00.343 "data_offset": 0, 00:21:00.343 "data_size": 63488 00:21:00.343 }, 00:21:00.343 { 00:21:00.343 "name": "BaseBdev3", 00:21:00.343 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:00.343 "is_configured": true, 00:21:00.343 "data_offset": 2048, 00:21:00.343 "data_size": 63488 00:21:00.343 }, 00:21:00.343 { 00:21:00.343 "name": "BaseBdev4", 00:21:00.343 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:00.343 "is_configured": true, 00:21:00.343 "data_offset": 2048, 00:21:00.343 "data_size": 63488 00:21:00.343 } 00:21:00.343 ] 00:21:00.343 }' 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.343 
13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.343 "name": "raid_bdev1", 00:21:00.343 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:00.343 "strip_size_kb": 0, 00:21:00.343 "state": "online", 00:21:00.343 "raid_level": "raid1", 00:21:00.343 "superblock": true, 00:21:00.343 "num_base_bdevs": 4, 00:21:00.343 "num_base_bdevs_discovered": 3, 00:21:00.343 "num_base_bdevs_operational": 3, 00:21:00.343 "base_bdevs_list": [ 00:21:00.343 { 00:21:00.343 "name": "spare", 00:21:00.343 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:21:00.343 "is_configured": true, 00:21:00.343 "data_offset": 2048, 00:21:00.343 "data_size": 63488 00:21:00.343 }, 00:21:00.343 { 00:21:00.343 "name": null, 00:21:00.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.343 "is_configured": false, 00:21:00.343 "data_offset": 0, 00:21:00.343 "data_size": 63488 00:21:00.343 }, 00:21:00.343 { 00:21:00.343 "name": "BaseBdev3", 00:21:00.343 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:00.343 "is_configured": true, 00:21:00.343 "data_offset": 2048, 00:21:00.343 "data_size": 63488 
00:21:00.343 }, 00:21:00.343 { 00:21:00.343 "name": "BaseBdev4", 00:21:00.343 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:00.343 "is_configured": true, 00:21:00.343 "data_offset": 2048, 00:21:00.343 "data_size": 63488 00:21:00.343 } 00:21:00.343 ] 00:21:00.343 }' 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.343 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.910 [2024-12-06 13:15:07.312309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:00.910 [2024-12-06 13:15:07.312351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.910 [2024-12-06 13:15:07.312482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.910 [2024-12-06 13:15:07.312615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.910 [2024-12-06 13:15:07.312644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:00.910 
13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.910 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:01.168 /dev/nbd0 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.427 1+0 records in 00:21:01.427 1+0 records out 00:21:01.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033678 s, 12.2 MB/s 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.427 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:01.686 /dev/nbd1 00:21:01.686 13:15:08 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.686 1+0 records in 00:21:01.686 1+0 records out 00:21:01.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353271 s, 11.6 MB/s 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:01.686 13:15:08 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.686 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:01.944 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:01.944 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:01.944 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:01.944 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:01.944 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:01.944 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:01.944 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.202 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.461 [2024-12-06 13:15:08.919494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:21:02.461 [2024-12-06 13:15:08.919560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.461 [2024-12-06 13:15:08.919594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:02.461 [2024-12-06 13:15:08.919611] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.461 [2024-12-06 13:15:08.922648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.461 [2024-12-06 13:15:08.922696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:02.461 [2024-12-06 13:15:08.922828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:02.461 [2024-12-06 13:15:08.922901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.461 [2024-12-06 13:15:08.923096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.461 [2024-12-06 13:15:08.923247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:02.461 spare 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.461 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.719 [2024-12-06 13:15:09.023390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:02.719 [2024-12-06 13:15:09.023473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:02.719 [2024-12-06 13:15:09.023899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:21:02.720 [2024-12-06 13:15:09.024177] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:02.720 [2024-12-06 13:15:09.024210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:02.720 [2024-12-06 13:15:09.024472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.720 "name": "raid_bdev1", 00:21:02.720 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:02.720 "strip_size_kb": 0, 00:21:02.720 "state": "online", 00:21:02.720 "raid_level": "raid1", 00:21:02.720 "superblock": true, 00:21:02.720 "num_base_bdevs": 4, 00:21:02.720 "num_base_bdevs_discovered": 3, 00:21:02.720 "num_base_bdevs_operational": 3, 00:21:02.720 "base_bdevs_list": [ 00:21:02.720 { 00:21:02.720 "name": "spare", 00:21:02.720 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:21:02.720 "is_configured": true, 00:21:02.720 "data_offset": 2048, 00:21:02.720 "data_size": 63488 00:21:02.720 }, 00:21:02.720 { 00:21:02.720 "name": null, 00:21:02.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.720 "is_configured": false, 00:21:02.720 "data_offset": 2048, 00:21:02.720 "data_size": 63488 00:21:02.720 }, 00:21:02.720 { 00:21:02.720 "name": "BaseBdev3", 00:21:02.720 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:02.720 "is_configured": true, 00:21:02.720 "data_offset": 2048, 00:21:02.720 "data_size": 63488 00:21:02.720 }, 00:21:02.720 { 00:21:02.720 "name": "BaseBdev4", 00:21:02.720 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:02.720 "is_configured": true, 00:21:02.720 "data_offset": 2048, 00:21:02.720 "data_size": 63488 00:21:02.720 } 00:21:02.720 ] 00:21:02.720 }' 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.720 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.288 13:15:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.288 "name": "raid_bdev1", 00:21:03.288 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:03.288 "strip_size_kb": 0, 00:21:03.288 "state": "online", 00:21:03.288 "raid_level": "raid1", 00:21:03.288 "superblock": true, 00:21:03.288 "num_base_bdevs": 4, 00:21:03.288 "num_base_bdevs_discovered": 3, 00:21:03.288 "num_base_bdevs_operational": 3, 00:21:03.288 "base_bdevs_list": [ 00:21:03.288 { 00:21:03.288 "name": "spare", 00:21:03.288 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:21:03.288 "is_configured": true, 00:21:03.288 "data_offset": 2048, 00:21:03.288 "data_size": 63488 00:21:03.288 }, 00:21:03.288 { 00:21:03.288 "name": null, 00:21:03.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.288 "is_configured": false, 00:21:03.288 "data_offset": 2048, 00:21:03.288 "data_size": 63488 00:21:03.288 }, 00:21:03.288 { 00:21:03.288 "name": "BaseBdev3", 00:21:03.288 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:03.288 "is_configured": true, 00:21:03.288 "data_offset": 2048, 00:21:03.288 "data_size": 63488 00:21:03.288 
}, 00:21:03.288 { 00:21:03.288 "name": "BaseBdev4", 00:21:03.288 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:03.288 "is_configured": true, 00:21:03.288 "data_offset": 2048, 00:21:03.288 "data_size": 63488 00:21:03.288 } 00:21:03.288 ] 00:21:03.288 }' 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.546 [2024-12-06 13:15:09.840734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.546 "name": "raid_bdev1", 00:21:03.546 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:03.546 "strip_size_kb": 0, 00:21:03.546 "state": "online", 00:21:03.546 "raid_level": "raid1", 00:21:03.546 "superblock": true, 00:21:03.546 "num_base_bdevs": 4, 00:21:03.546 "num_base_bdevs_discovered": 2, 00:21:03.546 "num_base_bdevs_operational": 
2, 00:21:03.546 "base_bdevs_list": [ 00:21:03.546 { 00:21:03.546 "name": null, 00:21:03.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.546 "is_configured": false, 00:21:03.546 "data_offset": 0, 00:21:03.546 "data_size": 63488 00:21:03.546 }, 00:21:03.546 { 00:21:03.546 "name": null, 00:21:03.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.546 "is_configured": false, 00:21:03.546 "data_offset": 2048, 00:21:03.546 "data_size": 63488 00:21:03.546 }, 00:21:03.546 { 00:21:03.546 "name": "BaseBdev3", 00:21:03.546 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:03.546 "is_configured": true, 00:21:03.546 "data_offset": 2048, 00:21:03.546 "data_size": 63488 00:21:03.546 }, 00:21:03.546 { 00:21:03.546 "name": "BaseBdev4", 00:21:03.546 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:03.546 "is_configured": true, 00:21:03.546 "data_offset": 2048, 00:21:03.546 "data_size": 63488 00:21:03.546 } 00:21:03.546 ] 00:21:03.546 }' 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.546 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.111 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:04.111 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.111 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.111 [2024-12-06 13:15:10.344885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.111 [2024-12-06 13:15:10.345382] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:04.111 [2024-12-06 13:15:10.345413] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:04.111 [2024-12-06 13:15:10.345491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.111 [2024-12-06 13:15:10.358855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:21:04.111 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.111 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:04.111 [2024-12-06 13:15:10.361512] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.048 "name": "raid_bdev1", 00:21:05.048 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:05.048 "strip_size_kb": 0, 00:21:05.048 "state": "online", 00:21:05.048 "raid_level": "raid1", 
00:21:05.048 "superblock": true, 00:21:05.048 "num_base_bdevs": 4, 00:21:05.048 "num_base_bdevs_discovered": 3, 00:21:05.048 "num_base_bdevs_operational": 3, 00:21:05.048 "process": { 00:21:05.048 "type": "rebuild", 00:21:05.048 "target": "spare", 00:21:05.048 "progress": { 00:21:05.048 "blocks": 20480, 00:21:05.048 "percent": 32 00:21:05.048 } 00:21:05.048 }, 00:21:05.048 "base_bdevs_list": [ 00:21:05.048 { 00:21:05.048 "name": "spare", 00:21:05.048 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:21:05.048 "is_configured": true, 00:21:05.048 "data_offset": 2048, 00:21:05.048 "data_size": 63488 00:21:05.048 }, 00:21:05.048 { 00:21:05.048 "name": null, 00:21:05.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.048 "is_configured": false, 00:21:05.048 "data_offset": 2048, 00:21:05.048 "data_size": 63488 00:21:05.048 }, 00:21:05.048 { 00:21:05.048 "name": "BaseBdev3", 00:21:05.048 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:05.048 "is_configured": true, 00:21:05.048 "data_offset": 2048, 00:21:05.048 "data_size": 63488 00:21:05.048 }, 00:21:05.048 { 00:21:05.048 "name": "BaseBdev4", 00:21:05.048 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:05.048 "is_configured": true, 00:21:05.048 "data_offset": 2048, 00:21:05.048 "data_size": 63488 00:21:05.048 } 00:21:05.048 ] 00:21:05.048 }' 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:05.048 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.048 [2024-12-06 13:15:11.526655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.048 [2024-12-06 13:15:11.570461] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:05.048 [2024-12-06 13:15:11.570572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.048 [2024-12-06 13:15:11.570604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:05.048 [2024-12-06 13:15:11.570617] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.305 "name": "raid_bdev1", 00:21:05.305 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:05.305 "strip_size_kb": 0, 00:21:05.305 "state": "online", 00:21:05.305 "raid_level": "raid1", 00:21:05.305 "superblock": true, 00:21:05.305 "num_base_bdevs": 4, 00:21:05.305 "num_base_bdevs_discovered": 2, 00:21:05.305 "num_base_bdevs_operational": 2, 00:21:05.305 "base_bdevs_list": [ 00:21:05.305 { 00:21:05.305 "name": null, 00:21:05.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.305 "is_configured": false, 00:21:05.305 "data_offset": 0, 00:21:05.305 "data_size": 63488 00:21:05.305 }, 00:21:05.305 { 00:21:05.305 "name": null, 00:21:05.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.305 "is_configured": false, 00:21:05.305 "data_offset": 2048, 00:21:05.305 "data_size": 63488 00:21:05.305 }, 00:21:05.305 { 00:21:05.305 "name": "BaseBdev3", 00:21:05.305 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:05.305 "is_configured": true, 00:21:05.305 "data_offset": 2048, 00:21:05.305 "data_size": 63488 00:21:05.305 }, 00:21:05.305 { 00:21:05.305 "name": "BaseBdev4", 00:21:05.305 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:05.305 "is_configured": true, 00:21:05.305 "data_offset": 2048, 00:21:05.305 "data_size": 63488 00:21:05.305 } 00:21:05.305 ] 00:21:05.305 }' 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:05.305 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.868 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:05.868 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.868 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.868 [2024-12-06 13:15:12.106192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.868 [2024-12-06 13:15:12.106421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.868 [2024-12-06 13:15:12.106488] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:05.868 [2024-12-06 13:15:12.106507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.868 [2024-12-06 13:15:12.107123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.868 [2024-12-06 13:15:12.107149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.868 [2024-12-06 13:15:12.107275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:05.868 [2024-12-06 13:15:12.107294] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:05.868 [2024-12-06 13:15:12.107314] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:05.868 [2024-12-06 13:15:12.107346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.868 [2024-12-06 13:15:12.120433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:21:05.868 spare 00:21:05.868 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.868 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:05.868 [2024-12-06 13:15:12.122925] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.797 "name": "raid_bdev1", 00:21:06.797 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:06.797 "strip_size_kb": 0, 00:21:06.797 "state": "online", 00:21:06.797 
"raid_level": "raid1", 00:21:06.797 "superblock": true, 00:21:06.797 "num_base_bdevs": 4, 00:21:06.797 "num_base_bdevs_discovered": 3, 00:21:06.797 "num_base_bdevs_operational": 3, 00:21:06.797 "process": { 00:21:06.797 "type": "rebuild", 00:21:06.797 "target": "spare", 00:21:06.797 "progress": { 00:21:06.797 "blocks": 20480, 00:21:06.797 "percent": 32 00:21:06.797 } 00:21:06.797 }, 00:21:06.797 "base_bdevs_list": [ 00:21:06.797 { 00:21:06.797 "name": "spare", 00:21:06.797 "uuid": "e8138645-e521-5fc0-8a07-19da13a1cb85", 00:21:06.797 "is_configured": true, 00:21:06.797 "data_offset": 2048, 00:21:06.797 "data_size": 63488 00:21:06.797 }, 00:21:06.797 { 00:21:06.797 "name": null, 00:21:06.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.797 "is_configured": false, 00:21:06.797 "data_offset": 2048, 00:21:06.797 "data_size": 63488 00:21:06.797 }, 00:21:06.797 { 00:21:06.797 "name": "BaseBdev3", 00:21:06.797 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:06.797 "is_configured": true, 00:21:06.797 "data_offset": 2048, 00:21:06.797 "data_size": 63488 00:21:06.797 }, 00:21:06.797 { 00:21:06.797 "name": "BaseBdev4", 00:21:06.797 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:06.797 "is_configured": true, 00:21:06.797 "data_offset": 2048, 00:21:06.797 "data_size": 63488 00:21:06.797 } 00:21:06.797 ] 00:21:06.797 }' 00:21:06.797 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.798 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.798 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.798 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.798 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:06.798 13:15:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.798 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.798 [2024-12-06 13:15:13.284208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.054 [2024-12-06 13:15:13.331136] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:07.054 [2024-12-06 13:15:13.331217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.054 [2024-12-06 13:15:13.331242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.054 [2024-12-06 13:15:13.331256] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.054 
13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.054 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.054 "name": "raid_bdev1", 00:21:07.055 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:07.055 "strip_size_kb": 0, 00:21:07.055 "state": "online", 00:21:07.055 "raid_level": "raid1", 00:21:07.055 "superblock": true, 00:21:07.055 "num_base_bdevs": 4, 00:21:07.055 "num_base_bdevs_discovered": 2, 00:21:07.055 "num_base_bdevs_operational": 2, 00:21:07.055 "base_bdevs_list": [ 00:21:07.055 { 00:21:07.055 "name": null, 00:21:07.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.055 "is_configured": false, 00:21:07.055 "data_offset": 0, 00:21:07.055 "data_size": 63488 00:21:07.055 }, 00:21:07.055 { 00:21:07.055 "name": null, 00:21:07.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.055 "is_configured": false, 00:21:07.055 "data_offset": 2048, 00:21:07.055 "data_size": 63488 00:21:07.055 }, 00:21:07.055 { 00:21:07.055 "name": "BaseBdev3", 00:21:07.055 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:07.055 "is_configured": true, 00:21:07.055 "data_offset": 2048, 00:21:07.055 "data_size": 63488 00:21:07.055 }, 00:21:07.055 { 00:21:07.055 "name": "BaseBdev4", 00:21:07.055 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:07.055 "is_configured": true, 00:21:07.055 "data_offset": 2048, 00:21:07.055 "data_size": 63488 00:21:07.055 } 00:21:07.055 ] 00:21:07.055 }' 00:21:07.055 13:15:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.055 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.620 "name": "raid_bdev1", 00:21:07.620 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:07.620 "strip_size_kb": 0, 00:21:07.620 "state": "online", 00:21:07.620 "raid_level": "raid1", 00:21:07.620 "superblock": true, 00:21:07.620 "num_base_bdevs": 4, 00:21:07.620 "num_base_bdevs_discovered": 2, 00:21:07.620 "num_base_bdevs_operational": 2, 00:21:07.620 "base_bdevs_list": [ 00:21:07.620 { 00:21:07.620 "name": null, 00:21:07.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.620 "is_configured": false, 00:21:07.620 "data_offset": 0, 00:21:07.620 "data_size": 63488 00:21:07.620 }, 00:21:07.620 
{ 00:21:07.620 "name": null, 00:21:07.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.620 "is_configured": false, 00:21:07.620 "data_offset": 2048, 00:21:07.620 "data_size": 63488 00:21:07.620 }, 00:21:07.620 { 00:21:07.620 "name": "BaseBdev3", 00:21:07.620 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:07.620 "is_configured": true, 00:21:07.620 "data_offset": 2048, 00:21:07.620 "data_size": 63488 00:21:07.620 }, 00:21:07.620 { 00:21:07.620 "name": "BaseBdev4", 00:21:07.620 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:07.620 "is_configured": true, 00:21:07.620 "data_offset": 2048, 00:21:07.620 "data_size": 63488 00:21:07.620 } 00:21:07.620 ] 00:21:07.620 }' 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:07.620 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.620 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:07.620 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:07.621 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.621 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.621 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.621 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:07.621 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.621 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.621 [2024-12-06 13:15:14.022902] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:07.621 [2024-12-06 13:15:14.022972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.621 [2024-12-06 13:15:14.023002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:07.621 [2024-12-06 13:15:14.023020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.621 [2024-12-06 13:15:14.023616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.621 [2024-12-06 13:15:14.023659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:07.621 [2024-12-06 13:15:14.023762] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:07.621 [2024-12-06 13:15:14.023797] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:07.621 [2024-12-06 13:15:14.023816] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:07.621 [2024-12-06 13:15:14.023850] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:07.621 BaseBdev1 00:21:07.621 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.621 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.552 13:15:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.552 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.809 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.809 "name": "raid_bdev1", 00:21:08.809 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:08.809 "strip_size_kb": 0, 00:21:08.809 "state": "online", 00:21:08.809 "raid_level": "raid1", 00:21:08.809 "superblock": true, 00:21:08.809 "num_base_bdevs": 4, 00:21:08.809 "num_base_bdevs_discovered": 2, 00:21:08.809 "num_base_bdevs_operational": 2, 00:21:08.809 "base_bdevs_list": [ 00:21:08.809 { 00:21:08.810 "name": null, 00:21:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.810 "is_configured": false, 00:21:08.810 "data_offset": 0, 00:21:08.810 "data_size": 63488 00:21:08.810 }, 00:21:08.810 { 00:21:08.810 "name": null, 00:21:08.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.810 
"is_configured": false, 00:21:08.810 "data_offset": 2048, 00:21:08.810 "data_size": 63488 00:21:08.810 }, 00:21:08.810 { 00:21:08.810 "name": "BaseBdev3", 00:21:08.810 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:08.810 "is_configured": true, 00:21:08.810 "data_offset": 2048, 00:21:08.810 "data_size": 63488 00:21:08.810 }, 00:21:08.810 { 00:21:08.810 "name": "BaseBdev4", 00:21:08.810 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:08.810 "is_configured": true, 00:21:08.810 "data_offset": 2048, 00:21:08.810 "data_size": 63488 00:21:08.810 } 00:21:08.810 ] 00:21:08.810 }' 00:21:08.810 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.810 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.067 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.324 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:09.324 "name": "raid_bdev1", 00:21:09.324 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:09.324 "strip_size_kb": 0, 00:21:09.324 "state": "online", 00:21:09.324 "raid_level": "raid1", 00:21:09.324 "superblock": true, 00:21:09.324 "num_base_bdevs": 4, 00:21:09.324 "num_base_bdevs_discovered": 2, 00:21:09.324 "num_base_bdevs_operational": 2, 00:21:09.324 "base_bdevs_list": [ 00:21:09.324 { 00:21:09.324 "name": null, 00:21:09.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.324 "is_configured": false, 00:21:09.325 "data_offset": 0, 00:21:09.325 "data_size": 63488 00:21:09.325 }, 00:21:09.325 { 00:21:09.325 "name": null, 00:21:09.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.325 "is_configured": false, 00:21:09.325 "data_offset": 2048, 00:21:09.325 "data_size": 63488 00:21:09.325 }, 00:21:09.325 { 00:21:09.325 "name": "BaseBdev3", 00:21:09.325 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:09.325 "is_configured": true, 00:21:09.325 "data_offset": 2048, 00:21:09.325 "data_size": 63488 00:21:09.325 }, 00:21:09.325 { 00:21:09.325 "name": "BaseBdev4", 00:21:09.325 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:09.325 "is_configured": true, 00:21:09.325 "data_offset": 2048, 00:21:09.325 "data_size": 63488 00:21:09.325 } 00:21:09.325 ] 00:21:09.325 }' 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.325 [2024-12-06 13:15:15.759435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.325 [2024-12-06 13:15:15.759718] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:09.325 [2024-12-06 13:15:15.759739] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:09.325 request: 00:21:09.325 { 00:21:09.325 "base_bdev": "BaseBdev1", 00:21:09.325 "raid_bdev": "raid_bdev1", 00:21:09.325 "method": "bdev_raid_add_base_bdev", 00:21:09.325 "req_id": 1 00:21:09.325 } 00:21:09.325 Got JSON-RPC error response 00:21:09.325 response: 00:21:09.325 { 00:21:09.325 "code": -22, 00:21:09.325 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:09.325 } 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:09.325 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.259 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:10.518 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.518 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.518 "name": "raid_bdev1", 00:21:10.518 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:10.518 "strip_size_kb": 0, 00:21:10.518 "state": "online", 00:21:10.518 "raid_level": "raid1", 00:21:10.518 "superblock": true, 00:21:10.518 "num_base_bdevs": 4, 00:21:10.518 "num_base_bdevs_discovered": 2, 00:21:10.518 "num_base_bdevs_operational": 2, 00:21:10.518 "base_bdevs_list": [ 00:21:10.518 { 00:21:10.518 "name": null, 00:21:10.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.518 "is_configured": false, 00:21:10.518 "data_offset": 0, 00:21:10.518 "data_size": 63488 00:21:10.518 }, 00:21:10.518 { 00:21:10.518 "name": null, 00:21:10.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.518 "is_configured": false, 00:21:10.518 "data_offset": 2048, 00:21:10.518 "data_size": 63488 00:21:10.518 }, 00:21:10.518 { 00:21:10.518 "name": "BaseBdev3", 00:21:10.518 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:10.518 "is_configured": true, 00:21:10.518 "data_offset": 2048, 00:21:10.518 "data_size": 63488 00:21:10.518 }, 00:21:10.518 { 00:21:10.518 "name": "BaseBdev4", 00:21:10.518 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:10.518 "is_configured": true, 00:21:10.518 "data_offset": 2048, 00:21:10.518 "data_size": 63488 00:21:10.518 } 00:21:10.518 ] 00:21:10.518 }' 00:21:10.518 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.518 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.776 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.776 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.776 13:15:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.776 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.776 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.776 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.776 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.776 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.776 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.035 "name": "raid_bdev1", 00:21:11.035 "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3", 00:21:11.035 "strip_size_kb": 0, 00:21:11.035 "state": "online", 00:21:11.035 "raid_level": "raid1", 00:21:11.035 "superblock": true, 00:21:11.035 "num_base_bdevs": 4, 00:21:11.035 "num_base_bdevs_discovered": 2, 00:21:11.035 "num_base_bdevs_operational": 2, 00:21:11.035 "base_bdevs_list": [ 00:21:11.035 { 00:21:11.035 "name": null, 00:21:11.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.035 "is_configured": false, 00:21:11.035 "data_offset": 0, 00:21:11.035 "data_size": 63488 00:21:11.035 }, 00:21:11.035 { 00:21:11.035 "name": null, 00:21:11.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.035 "is_configured": false, 00:21:11.035 "data_offset": 2048, 00:21:11.035 "data_size": 63488 00:21:11.035 }, 00:21:11.035 { 00:21:11.035 "name": "BaseBdev3", 00:21:11.035 "uuid": "4620500d-129b-5989-936d-c485e57a6b1c", 00:21:11.035 "is_configured": true, 00:21:11.035 "data_offset": 2048, 00:21:11.035 "data_size": 63488 00:21:11.035 }, 
00:21:11.035 { 00:21:11.035 "name": "BaseBdev4", 00:21:11.035 "uuid": "d202da3b-125d-5424-b7bf-fafa8f051765", 00:21:11.035 "is_configured": true, 00:21:11.035 "data_offset": 2048, 00:21:11.035 "data_size": 63488 00:21:11.035 } 00:21:11.035 ] 00:21:11.035 }' 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78586 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78586 ']' 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78586 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78586 00:21:11.035 killing process with pid 78586 00:21:11.035 Received shutdown signal, test time was about 60.000000 seconds 00:21:11.035 00:21:11.035 Latency(us) 00:21:11.035 [2024-12-06T13:15:17.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.035 [2024-12-06T13:15:17.564Z] =================================================================================================================== 00:21:11.035 [2024-12-06T13:15:17.564Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78586' 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78586 00:21:11.035 [2024-12-06 13:15:17.490854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.035 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78586 00:21:11.035 [2024-12-06 13:15:17.491030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.035 [2024-12-06 13:15:17.491122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.035 [2024-12-06 13:15:17.491139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:11.601 [2024-12-06 13:15:17.939069] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:12.975 00:21:12.975 real 0m30.295s 00:21:12.975 user 0m36.574s 00:21:12.975 sys 0m4.286s 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.975 ************************************ 00:21:12.975 END TEST raid_rebuild_test_sb 00:21:12.975 ************************************ 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.975 13:15:19 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:21:12.975 13:15:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:12.975 13:15:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.975 13:15:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
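Aside: the `verify_raid_bdev_state` calls repeated throughout the log above all follow the same pattern — fetch the bdev list with the `bdev_raid_get_bdevs all` RPC, select one entry with `jq -r '.[] | select(.name == "raid_bdev1")'`, then compare its fields against the expected state. The sketch below mirrors that filter-and-check logic in Python against an abridged copy of the JSON captured in the log; it is an illustration only, not part of the SPDK harness, and the sample payload is hand-copied from the `raid_bdev_info` output above.

```python
import json

# Abridged sample of the "bdev_raid_get_bdevs all" RPC response, with field
# values copied from the raid_bdev_info blobs captured in the log above.
rpc_output = json.dumps([
    {
        "name": "raid_bdev1",
        "uuid": "58f8a15a-75ab-41ec-b01a-8b24ba16e6a3",
        "strip_size_kb": 0,
        "state": "online",
        "raid_level": "raid1",
        "superblock": True,
        "num_base_bdevs": 4,
        "num_base_bdevs_discovered": 2,
        "num_base_bdevs_operational": 2,
    }
])

# Equivalent of the harness's jq filter:
#   jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in json.loads(rpc_output) if b["name"] == "raid_bdev1")

# The comparisons verify_raid_bdev_state performs: expected state, RAID level,
# strip size, and base-bdev counts.
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["strip_size_kb"] == 0
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 2
print("raid_bdev1 state verified")
```

The companion `verify_raid_bdev_process` checks seen in the log work the same way, but read `.process.type // "none"` and `.process.target // "none"` to confirm whether a rebuild is running and which bdev it targets.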
00:21:12.975 ************************************ 00:21:12.975 START TEST raid_rebuild_test_io 00:21:12.975 ************************************ 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:12.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79391 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79391 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79391 ']' 00:21:12.975 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.976 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 
2 -U -z -L bdev_raid 00:21:12.976 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.976 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.976 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.976 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:12.976 [2024-12-06 13:15:19.375110] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:21:12.976 [2024-12-06 13:15:19.375562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79391 ] 00:21:12.976 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:12.976 Zero copy mechanism will not be used. 
00:21:13.234 [2024-12-06 13:15:19.558291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.234 [2024-12-06 13:15:19.711865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.492 [2024-12-06 13:15:19.931751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.492 [2024-12-06 13:15:19.931932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 BaseBdev1_malloc 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 [2024-12-06 13:15:20.472546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:14.060 [2024-12-06 13:15:20.472636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.060 [2024-12-06 13:15:20.472683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:14.060 [2024-12-06 
13:15:20.472703] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.060 [2024-12-06 13:15:20.476120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.060 [2024-12-06 13:15:20.476311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:14.060 BaseBdev1 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 BaseBdev2_malloc 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.060 [2024-12-06 13:15:20.530856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:14.060 [2024-12-06 13:15:20.531074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.060 [2024-12-06 13:15:20.531114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:14.060 [2024-12-06 13:15:20.531134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.060 [2024-12-06 13:15:20.534125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:21:14.060 [2024-12-06 13:15:20.534318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:14.060 BaseBdev2 00:21:14.060 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.061 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:14.061 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:14.061 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.061 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 BaseBdev3_malloc 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 [2024-12-06 13:15:20.599208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:14.319 [2024-12-06 13:15:20.599300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.319 [2024-12-06 13:15:20.599344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:14.319 [2024-12-06 13:15:20.599363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.319 [2024-12-06 13:15:20.602412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.319 [2024-12-06 13:15:20.602477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:14.319 BaseBdev3 00:21:14.319 13:15:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 BaseBdev4_malloc 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 [2024-12-06 13:15:20.654258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:14.319 [2024-12-06 13:15:20.654610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.319 [2024-12-06 13:15:20.654685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:14.319 [2024-12-06 13:15:20.654729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.319 [2024-12-06 13:15:20.659606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.319 [2024-12-06 13:15:20.659689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:14.319 BaseBdev4 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 spare_malloc 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 spare_delay 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 [2024-12-06 13:15:20.726842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:14.319 [2024-12-06 13:15:20.726933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.319 [2024-12-06 13:15:20.726962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:14.319 [2024-12-06 13:15:20.726980] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.319 [2024-12-06 13:15:20.729863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.319 [2024-12-06 13:15:20.730039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:14.319 spare 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.319 [2024-12-06 13:15:20.738962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:14.319 [2024-12-06 13:15:20.741613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:14.319 [2024-12-06 13:15:20.741706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:14.319 [2024-12-06 13:15:20.741789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:14.319 [2024-12-06 13:15:20.741921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:14.319 [2024-12-06 13:15:20.741944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:14.319 [2024-12-06 13:15:20.742298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:14.319 [2024-12-06 13:15:20.742560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:14.319 [2024-12-06 13:15:20.742581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:14.319 [2024-12-06 13:15:20.742768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.319 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:14.320 13:15:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.320 "name": "raid_bdev1", 00:21:14.320 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:14.320 "strip_size_kb": 0, 00:21:14.320 "state": "online", 00:21:14.320 "raid_level": "raid1", 00:21:14.320 "superblock": false, 00:21:14.320 "num_base_bdevs": 4, 00:21:14.320 "num_base_bdevs_discovered": 4, 00:21:14.320 "num_base_bdevs_operational": 4, 00:21:14.320 "base_bdevs_list": [ 00:21:14.320 
{ 00:21:14.320 "name": "BaseBdev1", 00:21:14.320 "uuid": "abbe7a2e-97f9-59d3-ba51-6b286f5a00e1", 00:21:14.320 "is_configured": true, 00:21:14.320 "data_offset": 0, 00:21:14.320 "data_size": 65536 00:21:14.320 }, 00:21:14.320 { 00:21:14.320 "name": "BaseBdev2", 00:21:14.320 "uuid": "57dfd96c-8889-5058-b045-a6ee5a986dfe", 00:21:14.320 "is_configured": true, 00:21:14.320 "data_offset": 0, 00:21:14.320 "data_size": 65536 00:21:14.320 }, 00:21:14.320 { 00:21:14.320 "name": "BaseBdev3", 00:21:14.320 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:14.320 "is_configured": true, 00:21:14.320 "data_offset": 0, 00:21:14.320 "data_size": 65536 00:21:14.320 }, 00:21:14.320 { 00:21:14.320 "name": "BaseBdev4", 00:21:14.320 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:14.320 "is_configured": true, 00:21:14.320 "data_offset": 0, 00:21:14.320 "data_size": 65536 00:21:14.320 } 00:21:14.320 ] 00:21:14.320 }' 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.320 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.885 [2024-12-06 13:15:21.295650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.885 
13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.885 [2024-12-06 13:15:21.403142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.885 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.193 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.193 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.193 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.193 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:15.193 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.193 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.193 "name": "raid_bdev1", 00:21:15.193 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:15.193 "strip_size_kb": 0, 00:21:15.193 "state": "online", 00:21:15.193 "raid_level": "raid1", 00:21:15.193 "superblock": false, 00:21:15.193 "num_base_bdevs": 4, 00:21:15.193 "num_base_bdevs_discovered": 3, 00:21:15.193 "num_base_bdevs_operational": 3, 00:21:15.193 "base_bdevs_list": [ 00:21:15.193 { 00:21:15.193 "name": null, 00:21:15.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.193 "is_configured": false, 00:21:15.193 "data_offset": 0, 00:21:15.193 "data_size": 65536 00:21:15.193 }, 00:21:15.193 { 00:21:15.193 "name": "BaseBdev2", 00:21:15.193 "uuid": "57dfd96c-8889-5058-b045-a6ee5a986dfe", 00:21:15.193 "is_configured": true, 00:21:15.193 "data_offset": 0, 00:21:15.193 "data_size": 65536 00:21:15.193 }, 00:21:15.193 { 00:21:15.193 "name": "BaseBdev3", 00:21:15.193 "uuid": 
"7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:15.193 "is_configured": true, 00:21:15.193 "data_offset": 0, 00:21:15.193 "data_size": 65536 00:21:15.193 }, 00:21:15.193 { 00:21:15.193 "name": "BaseBdev4", 00:21:15.193 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:15.193 "is_configured": true, 00:21:15.193 "data_offset": 0, 00:21:15.193 "data_size": 65536 00:21:15.193 } 00:21:15.193 ] 00:21:15.193 }' 00:21:15.193 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.193 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:15.193 [2024-12-06 13:15:21.527907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:15.193 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:15.193 Zero copy mechanism will not be used. 00:21:15.193 Running I/O for 60 seconds... 00:21:15.450 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:15.450 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.450 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:15.450 [2024-12-06 13:15:21.916425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:15.450 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.450 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:15.708 [2024-12-06 13:15:22.001043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:15.708 [2024-12-06 13:15:22.003786] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:15.708 [2024-12-06 13:15:22.122670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:15.708 
[2024-12-06 13:15:22.124477] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:15.966 [2024-12-06 13:15:22.329106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:15.966 [2024-12-06 13:15:22.329549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:16.225 148.00 IOPS, 444.00 MiB/s [2024-12-06T13:15:22.754Z] [2024-12-06 13:15:22.588547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:16.225 [2024-12-06 13:15:22.708741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:16.225 [2024-12-06 13:15:22.709652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:21:16.484 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.743 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.743 "name": "raid_bdev1", 00:21:16.743 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:16.743 "strip_size_kb": 0, 00:21:16.743 "state": "online", 00:21:16.743 "raid_level": "raid1", 00:21:16.743 "superblock": false, 00:21:16.743 "num_base_bdevs": 4, 00:21:16.743 "num_base_bdevs_discovered": 4, 00:21:16.743 "num_base_bdevs_operational": 4, 00:21:16.743 "process": { 00:21:16.743 "type": "rebuild", 00:21:16.743 "target": "spare", 00:21:16.743 "progress": { 00:21:16.743 "blocks": 12288, 00:21:16.743 "percent": 18 00:21:16.743 } 00:21:16.743 }, 00:21:16.743 "base_bdevs_list": [ 00:21:16.743 { 00:21:16.743 "name": "spare", 00:21:16.743 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:16.743 "is_configured": true, 00:21:16.743 "data_offset": 0, 00:21:16.743 "data_size": 65536 00:21:16.743 }, 00:21:16.743 { 00:21:16.743 "name": "BaseBdev2", 00:21:16.743 "uuid": "57dfd96c-8889-5058-b045-a6ee5a986dfe", 00:21:16.743 "is_configured": true, 00:21:16.743 "data_offset": 0, 00:21:16.743 "data_size": 65536 00:21:16.743 }, 00:21:16.743 { 00:21:16.743 "name": "BaseBdev3", 00:21:16.743 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:16.743 "is_configured": true, 00:21:16.743 "data_offset": 0, 00:21:16.743 "data_size": 65536 00:21:16.743 }, 00:21:16.743 { 00:21:16.743 "name": "BaseBdev4", 00:21:16.743 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:16.743 "is_configured": true, 00:21:16.743 "data_offset": 0, 00:21:16.743 "data_size": 65536 00:21:16.743 } 00:21:16.743 ] 00:21:16.743 }' 00:21:16.743 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.743 [2024-12-06 13:15:23.039615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:21:16.743 [2024-12-06 13:15:23.040397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:16.743 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.743 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.743 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.743 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:16.743 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.743 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:16.743 [2024-12-06 13:15:23.164943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:16.743 [2024-12-06 13:15:23.264891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:17.001 [2024-12-06 13:15:23.397547] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:17.002 [2024-12-06 13:15:23.402393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.002 [2024-12-06 13:15:23.402657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:17.002 [2024-12-06 13:15:23.402696] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:17.002 [2024-12-06 13:15:23.435711] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 3 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.002 "name": "raid_bdev1", 00:21:17.002 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:17.002 "strip_size_kb": 0, 00:21:17.002 "state": "online", 00:21:17.002 "raid_level": "raid1", 00:21:17.002 "superblock": false, 00:21:17.002 "num_base_bdevs": 4, 00:21:17.002 "num_base_bdevs_discovered": 3, 00:21:17.002 "num_base_bdevs_operational": 3, 
00:21:17.002 "base_bdevs_list": [ 00:21:17.002 { 00:21:17.002 "name": null, 00:21:17.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.002 "is_configured": false, 00:21:17.002 "data_offset": 0, 00:21:17.002 "data_size": 65536 00:21:17.002 }, 00:21:17.002 { 00:21:17.002 "name": "BaseBdev2", 00:21:17.002 "uuid": "57dfd96c-8889-5058-b045-a6ee5a986dfe", 00:21:17.002 "is_configured": true, 00:21:17.002 "data_offset": 0, 00:21:17.002 "data_size": 65536 00:21:17.002 }, 00:21:17.002 { 00:21:17.002 "name": "BaseBdev3", 00:21:17.002 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:17.002 "is_configured": true, 00:21:17.002 "data_offset": 0, 00:21:17.002 "data_size": 65536 00:21:17.002 }, 00:21:17.002 { 00:21:17.002 "name": "BaseBdev4", 00:21:17.002 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:17.002 "is_configured": true, 00:21:17.002 "data_offset": 0, 00:21:17.002 "data_size": 65536 00:21:17.002 } 00:21:17.002 ] 00:21:17.002 }' 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.002 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.518 104.00 IOPS, 312.00 MiB/s [2024-12-06T13:15:24.047Z] 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:17.518 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.518 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:17.518 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:17.518 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.518 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.518 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:17.518 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.518 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.777 "name": "raid_bdev1", 00:21:17.777 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:17.777 "strip_size_kb": 0, 00:21:17.777 "state": "online", 00:21:17.777 "raid_level": "raid1", 00:21:17.777 "superblock": false, 00:21:17.777 "num_base_bdevs": 4, 00:21:17.777 "num_base_bdevs_discovered": 3, 00:21:17.777 "num_base_bdevs_operational": 3, 00:21:17.777 "base_bdevs_list": [ 00:21:17.777 { 00:21:17.777 "name": null, 00:21:17.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.777 "is_configured": false, 00:21:17.777 "data_offset": 0, 00:21:17.777 "data_size": 65536 00:21:17.777 }, 00:21:17.777 { 00:21:17.777 "name": "BaseBdev2", 00:21:17.777 "uuid": "57dfd96c-8889-5058-b045-a6ee5a986dfe", 00:21:17.777 "is_configured": true, 00:21:17.777 "data_offset": 0, 00:21:17.777 "data_size": 65536 00:21:17.777 }, 00:21:17.777 { 00:21:17.777 "name": "BaseBdev3", 00:21:17.777 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:17.777 "is_configured": true, 00:21:17.777 "data_offset": 0, 00:21:17.777 "data_size": 65536 00:21:17.777 }, 00:21:17.777 { 00:21:17.777 "name": "BaseBdev4", 00:21:17.777 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:17.777 "is_configured": true, 00:21:17.777 "data_offset": 0, 00:21:17.777 "data_size": 65536 00:21:17.777 } 00:21:17.777 ] 00:21:17.777 }' 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:17.777 13:15:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.777 [2024-12-06 13:15:24.178663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.777 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:17.777 [2024-12-06 13:15:24.272201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:17.777 [2024-12-06 13:15:24.274931] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:18.042 [2024-12-06 13:15:24.418107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:18.042 132.33 IOPS, 397.00 MiB/s [2024-12-06T13:15:24.571Z] [2024-12-06 13:15:24.566510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:18.042 [2024-12-06 13:15:24.567387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:18.609 [2024-12-06 13:15:24.989344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:18.609 [2024-12-06 13:15:24.998694] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:18.868 [2024-12-06 
13:15:25.214208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:18.868 [2024-12-06 13:15:25.214575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.868 "name": "raid_bdev1", 00:21:18.868 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:18.868 "strip_size_kb": 0, 00:21:18.868 "state": "online", 00:21:18.868 "raid_level": "raid1", 00:21:18.868 "superblock": false, 00:21:18.868 "num_base_bdevs": 4, 00:21:18.868 "num_base_bdevs_discovered": 4, 00:21:18.868 "num_base_bdevs_operational": 4, 00:21:18.868 "process": { 00:21:18.868 "type": "rebuild", 00:21:18.868 "target": "spare", 00:21:18.868 "progress": { 
00:21:18.868 "blocks": 10240, 00:21:18.868 "percent": 15 00:21:18.868 } 00:21:18.868 }, 00:21:18.868 "base_bdevs_list": [ 00:21:18.868 { 00:21:18.868 "name": "spare", 00:21:18.868 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:18.868 "is_configured": true, 00:21:18.868 "data_offset": 0, 00:21:18.868 "data_size": 65536 00:21:18.868 }, 00:21:18.868 { 00:21:18.868 "name": "BaseBdev2", 00:21:18.868 "uuid": "57dfd96c-8889-5058-b045-a6ee5a986dfe", 00:21:18.868 "is_configured": true, 00:21:18.868 "data_offset": 0, 00:21:18.868 "data_size": 65536 00:21:18.868 }, 00:21:18.868 { 00:21:18.868 "name": "BaseBdev3", 00:21:18.868 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:18.868 "is_configured": true, 00:21:18.868 "data_offset": 0, 00:21:18.868 "data_size": 65536 00:21:18.868 }, 00:21:18.868 { 00:21:18.868 "name": "BaseBdev4", 00:21:18.868 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:18.868 "is_configured": true, 00:21:18.868 "data_offset": 0, 00:21:18.868 "data_size": 65536 00:21:18.868 } 00:21:18.868 ] 00:21:18.868 }' 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.868 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:19.127 [2024-12-06 13:15:25.416687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:19.127 [2024-12-06 13:15:25.532767] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:19.127 [2024-12-06 13:15:25.532844] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:19.127 
124.75 IOPS, 374.25 MiB/s [2024-12-06T13:15:25.656Z] 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.127 "name": "raid_bdev1", 00:21:19.127 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:19.127 "strip_size_kb": 0, 00:21:19.127 "state": "online", 00:21:19.127 "raid_level": "raid1", 00:21:19.127 "superblock": false, 00:21:19.127 "num_base_bdevs": 4, 00:21:19.127 "num_base_bdevs_discovered": 3, 00:21:19.127 "num_base_bdevs_operational": 3, 00:21:19.127 "process": { 00:21:19.127 "type": "rebuild", 00:21:19.127 "target": "spare", 00:21:19.127 "progress": { 00:21:19.127 "blocks": 12288, 00:21:19.127 "percent": 18 00:21:19.127 } 00:21:19.127 }, 00:21:19.127 "base_bdevs_list": [ 00:21:19.127 { 00:21:19.127 "name": "spare", 00:21:19.127 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:19.127 "is_configured": true, 00:21:19.127 "data_offset": 0, 00:21:19.127 "data_size": 65536 00:21:19.127 }, 00:21:19.127 { 00:21:19.127 "name": null, 00:21:19.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.127 "is_configured": false, 00:21:19.127 "data_offset": 0, 00:21:19.127 "data_size": 65536 00:21:19.127 }, 00:21:19.127 { 00:21:19.127 "name": "BaseBdev3", 00:21:19.127 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:19.127 "is_configured": true, 00:21:19.127 "data_offset": 0, 00:21:19.127 "data_size": 65536 00:21:19.127 }, 00:21:19.127 { 00:21:19.127 "name": "BaseBdev4", 00:21:19.127 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:19.127 "is_configured": true, 00:21:19.127 "data_offset": 0, 00:21:19.127 "data_size": 65536 00:21:19.127 } 00:21:19.127 ] 00:21:19.127 }' 00:21:19.127 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.385 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.386 [2024-12-06 13:15:25.677098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:19.386 [2024-12-06 13:15:25.677878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=537 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.386 "name": 
"raid_bdev1", 00:21:19.386 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:19.386 "strip_size_kb": 0, 00:21:19.386 "state": "online", 00:21:19.386 "raid_level": "raid1", 00:21:19.386 "superblock": false, 00:21:19.386 "num_base_bdevs": 4, 00:21:19.386 "num_base_bdevs_discovered": 3, 00:21:19.386 "num_base_bdevs_operational": 3, 00:21:19.386 "process": { 00:21:19.386 "type": "rebuild", 00:21:19.386 "target": "spare", 00:21:19.386 "progress": { 00:21:19.386 "blocks": 14336, 00:21:19.386 "percent": 21 00:21:19.386 } 00:21:19.386 }, 00:21:19.386 "base_bdevs_list": [ 00:21:19.386 { 00:21:19.386 "name": "spare", 00:21:19.386 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:19.386 "is_configured": true, 00:21:19.386 "data_offset": 0, 00:21:19.386 "data_size": 65536 00:21:19.386 }, 00:21:19.386 { 00:21:19.386 "name": null, 00:21:19.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.386 "is_configured": false, 00:21:19.386 "data_offset": 0, 00:21:19.386 "data_size": 65536 00:21:19.386 }, 00:21:19.386 { 00:21:19.386 "name": "BaseBdev3", 00:21:19.386 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:19.386 "is_configured": true, 00:21:19.386 "data_offset": 0, 00:21:19.386 "data_size": 65536 00:21:19.386 }, 00:21:19.386 { 00:21:19.386 "name": "BaseBdev4", 00:21:19.386 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:19.386 "is_configured": true, 00:21:19.386 "data_offset": 0, 00:21:19.386 "data_size": 65536 00:21:19.386 } 00:21:19.386 ] 00:21:19.386 }' 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.386 [2024-12-06 13:15:25.825389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 
18432 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.386 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:19.953 [2024-12-06 13:15:26.182532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:19.953 [2024-12-06 13:15:26.406685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:19.953 [2024-12-06 13:15:26.407364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:20.210 107.60 IOPS, 322.80 MiB/s [2024-12-06T13:15:26.739Z] [2024-12-06 13:15:26.688846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.468 13:15:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:20.469 13:15:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.469 [2024-12-06 13:15:26.930306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:20.469 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.469 "name": "raid_bdev1", 00:21:20.469 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:20.469 "strip_size_kb": 0, 00:21:20.469 "state": "online", 00:21:20.469 "raid_level": "raid1", 00:21:20.469 "superblock": false, 00:21:20.469 "num_base_bdevs": 4, 00:21:20.469 "num_base_bdevs_discovered": 3, 00:21:20.469 "num_base_bdevs_operational": 3, 00:21:20.469 "process": { 00:21:20.469 "type": "rebuild", 00:21:20.469 "target": "spare", 00:21:20.469 "progress": { 00:21:20.469 "blocks": 26624, 00:21:20.469 "percent": 40 00:21:20.469 } 00:21:20.469 }, 00:21:20.469 "base_bdevs_list": [ 00:21:20.469 { 00:21:20.469 "name": "spare", 00:21:20.469 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:20.469 "is_configured": true, 00:21:20.469 "data_offset": 0, 00:21:20.469 "data_size": 65536 00:21:20.469 }, 00:21:20.469 { 00:21:20.469 "name": null, 00:21:20.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.469 "is_configured": false, 00:21:20.469 "data_offset": 0, 00:21:20.469 "data_size": 65536 00:21:20.469 }, 00:21:20.469 { 00:21:20.469 "name": "BaseBdev3", 00:21:20.469 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:20.469 "is_configured": true, 00:21:20.469 "data_offset": 0, 00:21:20.469 "data_size": 65536 00:21:20.469 }, 00:21:20.469 { 00:21:20.469 "name": "BaseBdev4", 00:21:20.469 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:20.469 "is_configured": true, 00:21:20.469 "data_offset": 0, 00:21:20.469 "data_size": 65536 00:21:20.469 } 00:21:20.469 ] 00:21:20.469 }' 00:21:20.469 13:15:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.469 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.727 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.727 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.727 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:20.727 [2024-12-06 13:15:27.200975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:20.727 [2024-12-06 13:15:27.201748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:20.992 [2024-12-06 13:15:27.413065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:20.992 [2024-12-06 13:15:27.413821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:21.250 96.33 IOPS, 289.00 MiB/s [2024-12-06T13:15:27.779Z] [2024-12-06 13:15:27.775418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:21.250 [2024-12-06 13:15:27.776111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.815 "name": "raid_bdev1", 00:21:21.815 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:21.815 "strip_size_kb": 0, 00:21:21.815 "state": "online", 00:21:21.815 "raid_level": "raid1", 00:21:21.815 "superblock": false, 00:21:21.815 "num_base_bdevs": 4, 00:21:21.815 "num_base_bdevs_discovered": 3, 00:21:21.815 "num_base_bdevs_operational": 3, 00:21:21.815 "process": { 00:21:21.815 "type": "rebuild", 00:21:21.815 "target": "spare", 00:21:21.815 "progress": { 00:21:21.815 "blocks": 40960, 00:21:21.815 "percent": 62 00:21:21.815 } 00:21:21.815 }, 00:21:21.815 "base_bdevs_list": [ 00:21:21.815 { 00:21:21.815 "name": "spare", 00:21:21.815 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:21.815 "is_configured": true, 00:21:21.815 "data_offset": 0, 00:21:21.815 "data_size": 65536 00:21:21.815 }, 00:21:21.815 { 00:21:21.815 "name": null, 00:21:21.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.815 "is_configured": false, 00:21:21.815 "data_offset": 0, 00:21:21.815 "data_size": 65536 00:21:21.815 }, 00:21:21.815 { 00:21:21.815 "name": "BaseBdev3", 00:21:21.815 "uuid": 
"7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:21.815 "is_configured": true, 00:21:21.815 "data_offset": 0, 00:21:21.815 "data_size": 65536 00:21:21.815 }, 00:21:21.815 { 00:21:21.815 "name": "BaseBdev4", 00:21:21.815 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:21.815 "is_configured": true, 00:21:21.815 "data_offset": 0, 00:21:21.815 "data_size": 65536 00:21:21.815 } 00:21:21.815 ] 00:21:21.815 }' 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.815 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.816 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:21.816 [2024-12-06 13:15:28.323334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:22.639 87.71 IOPS, 263.14 MiB/s [2024-12-06T13:15:29.168Z] [2024-12-06 13:15:28.885682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.898 "name": "raid_bdev1", 00:21:22.898 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:22.898 "strip_size_kb": 0, 00:21:22.898 "state": "online", 00:21:22.898 "raid_level": "raid1", 00:21:22.898 "superblock": false, 00:21:22.898 "num_base_bdevs": 4, 00:21:22.898 "num_base_bdevs_discovered": 3, 00:21:22.898 "num_base_bdevs_operational": 3, 00:21:22.898 "process": { 00:21:22.898 "type": "rebuild", 00:21:22.898 "target": "spare", 00:21:22.898 "progress": { 00:21:22.898 "blocks": 61440, 00:21:22.898 "percent": 93 00:21:22.898 } 00:21:22.898 }, 00:21:22.898 "base_bdevs_list": [ 00:21:22.898 { 00:21:22.898 "name": "spare", 00:21:22.898 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:22.898 "is_configured": true, 00:21:22.898 "data_offset": 0, 00:21:22.898 "data_size": 65536 00:21:22.898 }, 00:21:22.898 { 00:21:22.898 "name": null, 00:21:22.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.898 "is_configured": false, 00:21:22.898 "data_offset": 0, 00:21:22.898 "data_size": 65536 00:21:22.898 }, 00:21:22.898 { 00:21:22.898 "name": "BaseBdev3", 00:21:22.898 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:22.898 "is_configured": true, 00:21:22.898 "data_offset": 0, 00:21:22.898 "data_size": 65536 00:21:22.898 }, 00:21:22.898 { 00:21:22.898 "name": "BaseBdev4", 00:21:22.898 "uuid": 
"3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:22.898 "is_configured": true, 00:21:22.898 "data_offset": 0, 00:21:22.898 "data_size": 65536 00:21:22.898 } 00:21:22.898 ] 00:21:22.898 }' 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.898 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:23.157 [2024-12-06 13:15:29.450294] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:23.157 [2024-12-06 13:15:29.550284] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:23.157 [2024-12-06 13:15:29.553569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.091 80.25 IOPS, 240.75 MiB/s [2024-12-06T13:15:30.620Z] 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.091 13:15:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.091 "name": "raid_bdev1", 00:21:24.091 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:24.091 "strip_size_kb": 0, 00:21:24.091 "state": "online", 00:21:24.091 "raid_level": "raid1", 00:21:24.091 "superblock": false, 00:21:24.091 "num_base_bdevs": 4, 00:21:24.091 "num_base_bdevs_discovered": 3, 00:21:24.091 "num_base_bdevs_operational": 3, 00:21:24.091 "base_bdevs_list": [ 00:21:24.091 { 00:21:24.091 "name": "spare", 00:21:24.091 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:24.091 "is_configured": true, 00:21:24.091 "data_offset": 0, 00:21:24.091 "data_size": 65536 00:21:24.091 }, 00:21:24.091 { 00:21:24.091 "name": null, 00:21:24.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.091 "is_configured": false, 00:21:24.091 "data_offset": 0, 00:21:24.091 "data_size": 65536 00:21:24.091 }, 00:21:24.091 { 00:21:24.091 "name": "BaseBdev3", 00:21:24.091 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:24.091 "is_configured": true, 00:21:24.091 "data_offset": 0, 00:21:24.091 "data_size": 65536 00:21:24.091 }, 00:21:24.091 { 00:21:24.091 "name": "BaseBdev4", 00:21:24.091 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:24.091 "is_configured": true, 00:21:24.091 "data_offset": 0, 00:21:24.091 "data_size": 65536 00:21:24.091 } 00:21:24.091 ] 00:21:24.091 }' 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.091 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.091 76.00 IOPS, 228.00 MiB/s [2024-12-06T13:15:30.620Z] 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.348 "name": "raid_bdev1", 00:21:24.348 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:24.348 "strip_size_kb": 0, 00:21:24.348 "state": "online", 00:21:24.348 "raid_level": "raid1", 00:21:24.348 "superblock": false, 00:21:24.348 "num_base_bdevs": 4, 00:21:24.348 "num_base_bdevs_discovered": 3, 00:21:24.348 "num_base_bdevs_operational": 3, 00:21:24.348 
"base_bdevs_list": [ 00:21:24.348 { 00:21:24.348 "name": "spare", 00:21:24.348 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:24.348 "is_configured": true, 00:21:24.348 "data_offset": 0, 00:21:24.348 "data_size": 65536 00:21:24.348 }, 00:21:24.348 { 00:21:24.348 "name": null, 00:21:24.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.348 "is_configured": false, 00:21:24.348 "data_offset": 0, 00:21:24.348 "data_size": 65536 00:21:24.348 }, 00:21:24.348 { 00:21:24.348 "name": "BaseBdev3", 00:21:24.348 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:24.348 "is_configured": true, 00:21:24.348 "data_offset": 0, 00:21:24.348 "data_size": 65536 00:21:24.348 }, 00:21:24.348 { 00:21:24.348 "name": "BaseBdev4", 00:21:24.348 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:24.348 "is_configured": true, 00:21:24.348 "data_offset": 0, 00:21:24.348 "data_size": 65536 00:21:24.348 } 00:21:24.348 ] 00:21:24.348 }' 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.348 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.349 13:15:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.349 "name": "raid_bdev1", 00:21:24.349 "uuid": "44844a67-52ad-4931-a5bd-1a8b4c7885d4", 00:21:24.349 "strip_size_kb": 0, 00:21:24.349 "state": "online", 00:21:24.349 "raid_level": "raid1", 00:21:24.349 "superblock": false, 00:21:24.349 "num_base_bdevs": 4, 00:21:24.349 "num_base_bdevs_discovered": 3, 00:21:24.349 "num_base_bdevs_operational": 3, 00:21:24.349 "base_bdevs_list": [ 00:21:24.349 { 00:21:24.349 "name": "spare", 00:21:24.349 "uuid": "fb43bd16-8b62-5d81-a670-3d6dd6a45def", 00:21:24.349 "is_configured": true, 00:21:24.349 "data_offset": 0, 00:21:24.349 "data_size": 65536 00:21:24.349 }, 00:21:24.349 { 00:21:24.349 "name": null, 00:21:24.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.349 "is_configured": false, 00:21:24.349 "data_offset": 0, 00:21:24.349 "data_size": 65536 00:21:24.349 }, 
00:21:24.349 { 00:21:24.349 "name": "BaseBdev3", 00:21:24.349 "uuid": "7f340ab3-474c-5d43-8094-82d71a0f9f26", 00:21:24.349 "is_configured": true, 00:21:24.349 "data_offset": 0, 00:21:24.349 "data_size": 65536 00:21:24.349 }, 00:21:24.349 { 00:21:24.349 "name": "BaseBdev4", 00:21:24.349 "uuid": "3489e676-0330-5a69-a21c-866e0af2e77b", 00:21:24.349 "is_configured": true, 00:21:24.349 "data_offset": 0, 00:21:24.349 "data_size": 65536 00:21:24.349 } 00:21:24.349 ] 00:21:24.349 }' 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.349 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.913 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.913 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.913 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.913 [2024-12-06 13:15:31.221626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.913 [2024-12-06 13:15:31.221919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.913 00:21:24.913 Latency(us) 00:21:24.913 [2024-12-06T13:15:31.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.913 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:24.913 raid_bdev1 : 9.79 72.65 217.95 0.00 0.00 17971.77 299.75 124875.87 00:21:24.913 [2024-12-06T13:15:31.442Z] =================================================================================================================== 00:21:24.913 [2024-12-06T13:15:31.442Z] Total : 72.65 217.95 0.00 0.00 17971.77 299.75 124875.87 00:21:24.913 { 00:21:24.913 "results": [ 00:21:24.913 { 00:21:24.913 "job": "raid_bdev1", 00:21:24.913 "core_mask": "0x1", 00:21:24.913 "workload": "randrw", 
00:21:24.913 "percentage": 50, 00:21:24.913 "status": "finished", 00:21:24.913 "queue_depth": 2, 00:21:24.913 "io_size": 3145728, 00:21:24.913 "runtime": 9.786663, 00:21:24.913 "iops": 72.64989097918259, 00:21:24.913 "mibps": 217.94967293754775, 00:21:24.913 "io_failed": 0, 00:21:24.913 "io_timeout": 0, 00:21:24.913 "avg_latency_us": 17971.77085283212, 00:21:24.913 "min_latency_us": 299.75272727272727, 00:21:24.913 "max_latency_us": 124875.8690909091 00:21:24.913 } 00:21:24.913 ], 00:21:24.913 "core_count": 1 00:21:24.913 } 00:21:24.913 [2024-12-06 13:15:31.337998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.913 [2024-12-06 13:15:31.338117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.914 [2024-12-06 13:15:31.338280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.914 [2024-12-06 13:15:31.338300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:24.914 
13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:24.914 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:25.172 /dev/nbd0 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:21:25.430 13:15:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.430 1+0 records in 00:21:25.430 1+0 records out 00:21:25.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368587 s, 11.1 MB/s 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:25.430 13:15:31 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:25.430 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:25.688 /dev/nbd1 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.688 1+0 records in 00:21:25.688 1+0 records out 00:21:25.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308997 s, 13.3 MB/s 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.688 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:21:25.689 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.689 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:25.689 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:21:25.689 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.689 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:25.689 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:25.947 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:25.947 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:25.947 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:25.947 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:25.947 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:25.947 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.947 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:26.205 13:15:32 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:26.205 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:26.463 /dev/nbd1 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:26.463 1+0 records in 00:21:26.463 1+0 records out 00:21:26.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614837 s, 6.7 MB/s 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:26.463 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:26.722 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:26.722 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:26.722 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:26.722 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:26.722 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:26.722 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.722 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.981 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:27.241 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:27.241 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:27.241 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 79391 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79391 ']' 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79391 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79391 00:21:27.242 killing process with pid 79391 00:21:27.242 Received shutdown signal, test time was about 12.164566 seconds 00:21:27.242 00:21:27.242 Latency(us) 00:21:27.242 [2024-12-06T13:15:33.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.242 [2024-12-06T13:15:33.771Z] =================================================================================================================== 00:21:27.242 [2024-12-06T13:15:33.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79391' 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79391 00:21:27.242 [2024-12-06 13:15:33.695380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:27.242 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79391 00:21:27.809 [2024-12-06 13:15:34.081377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:28.755 ************************************ 00:21:28.755 END TEST raid_rebuild_test_io 00:21:28.755 ************************************ 
00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:28.755 00:21:28.755 real 0m15.949s 00:21:28.755 user 0m20.711s 00:21:28.755 sys 0m1.948s 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.755 13:15:35 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:21:28.755 13:15:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:28.755 13:15:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.755 13:15:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.755 ************************************ 00:21:28.755 START TEST raid_rebuild_test_sb_io 00:21:28.755 ************************************ 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # 
(( i++ )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:28.755 13:15:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79839 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79839 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79839 ']' 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.755 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.023 [2024-12-06 13:15:35.386529] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:21:29.023 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:29.023 Zero copy mechanism will not be used. 
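The bdevperf launch above passes `-o 3M` (3145728-byte I/Os), which exceeds the 65536-byte zero-copy threshold, so the tool logs that zero copy will not be used. A minimal sketch of that threshold decision, using the two sizes taken from the log notice (the function and constant names here are illustrative, not SPDK's):

```python
# Illustrative re-creation of the zero-copy threshold notice seen in the log.
# The 65536-byte threshold and the 3 MiB I/O size (-o 3M => 3145728 bytes)
# come from the log; the names below are hypothetical.
ZERO_COPY_THRESHOLD = 65536  # bytes, per the log notice

def zero_copy_enabled(io_size: int) -> bool:
    """Zero copy is only used when the I/O size fits within the threshold."""
    return io_size <= ZERO_COPY_THRESHOLD

io_size = 3 * 1024 * 1024  # -o 3M
assert io_size == 3145728  # matches "I/O size of 3145728" in the log
print(zero_copy_enabled(io_size))  # -> False: "Zero copy mechanism will not be used."
```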
00:21:29.023 [2024-12-06 13:15:35.386695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79839 ] 00:21:29.281 [2024-12-06 13:15:35.573555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.281 [2024-12-06 13:15:35.704336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.539 [2024-12-06 13:15:35.906363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.539 [2024-12-06 13:15:35.906466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.107 BaseBdev1_malloc 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.107 [2024-12-06 13:15:36.467558] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:30.107 [2024-12-06 13:15:36.467630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.107 [2024-12-06 13:15:36.467672] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:30.107 [2024-12-06 13:15:36.467693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.107 [2024-12-06 13:15:36.470551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.107 [2024-12-06 13:15:36.470595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:30.107 BaseBdev1 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.107 BaseBdev2_malloc 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.107 [2024-12-06 13:15:36.519468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:30.107 [2024-12-06 13:15:36.519543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:30.107 [2024-12-06 13:15:36.519575] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:30.107 [2024-12-06 13:15:36.519594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.107 [2024-12-06 13:15:36.522339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.107 [2024-12-06 13:15:36.522384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:30.107 BaseBdev2 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.107 BaseBdev3_malloc 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.107 [2024-12-06 13:15:36.589958] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:30.107 [2024-12-06 13:15:36.590026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.107 [2024-12-06 13:15:36.590058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:30.107 
[2024-12-06 13:15:36.590077] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.107 [2024-12-06 13:15:36.592812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.107 [2024-12-06 13:15:36.592860] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:30.107 BaseBdev3 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.107 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.366 BaseBdev4_malloc 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.366 [2024-12-06 13:15:36.641754] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:30.366 [2024-12-06 13:15:36.641830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.366 [2024-12-06 13:15:36.641863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:30.366 [2024-12-06 13:15:36.641882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.366 [2024-12-06 13:15:36.644619] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.366 [2024-12-06 13:15:36.644667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:30.366 BaseBdev4 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.366 spare_malloc 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.366 spare_delay 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.366 [2024-12-06 13:15:36.701537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:30.366 [2024-12-06 13:15:36.701603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.366 [2024-12-06 13:15:36.701630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:21:30.366 [2024-12-06 13:15:36.701649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.366 [2024-12-06 13:15:36.704383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.366 [2024-12-06 13:15:36.704428] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:30.366 spare 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.366 [2024-12-06 13:15:36.709593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:30.366 [2024-12-06 13:15:36.712007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:30.366 [2024-12-06 13:15:36.712099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.366 [2024-12-06 13:15:36.712182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:30.366 [2024-12-06 13:15:36.712437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:30.366 [2024-12-06 13:15:36.712476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:30.366 [2024-12-06 13:15:36.712789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:30.366 [2024-12-06 13:15:36.713035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:30.366 [2024-12-06 13:15:36.713052] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:30.366 [2024-12-06 13:15:36.713236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.366 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.366 "name": "raid_bdev1", 00:21:30.366 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:30.366 "strip_size_kb": 0, 00:21:30.366 "state": "online", 00:21:30.366 "raid_level": "raid1", 00:21:30.366 "superblock": true, 00:21:30.366 "num_base_bdevs": 4, 00:21:30.366 "num_base_bdevs_discovered": 4, 00:21:30.366 "num_base_bdevs_operational": 4, 00:21:30.366 "base_bdevs_list": [ 00:21:30.366 { 00:21:30.366 "name": "BaseBdev1", 00:21:30.366 "uuid": "572e636d-ba7d-52d7-a061-fe4ac05ea157", 00:21:30.366 "is_configured": true, 00:21:30.366 "data_offset": 2048, 00:21:30.366 "data_size": 63488 00:21:30.366 }, 00:21:30.366 { 00:21:30.366 "name": "BaseBdev2", 00:21:30.366 "uuid": "9a35c6bf-8059-5aa7-b780-cffa7333462d", 00:21:30.366 "is_configured": true, 00:21:30.366 "data_offset": 2048, 00:21:30.366 "data_size": 63488 00:21:30.366 }, 00:21:30.366 { 00:21:30.366 "name": "BaseBdev3", 00:21:30.366 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:30.366 "is_configured": true, 00:21:30.366 "data_offset": 2048, 00:21:30.366 "data_size": 63488 00:21:30.366 }, 00:21:30.366 { 00:21:30.366 "name": "BaseBdev4", 00:21:30.366 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:30.366 "is_configured": true, 00:21:30.366 "data_offset": 2048, 00:21:30.366 "data_size": 63488 00:21:30.366 } 00:21:30.366 ] 00:21:30.366 }' 00:21:30.367 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.367 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:30.934 [2024-12-06 13:15:37.218156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.934 [2024-12-06 13:15:37.325713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.934 13:15:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.934 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.934 "name": "raid_bdev1", 00:21:30.934 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:30.934 "strip_size_kb": 0, 00:21:30.934 "state": "online", 00:21:30.934 "raid_level": "raid1", 00:21:30.934 
"superblock": true, 00:21:30.934 "num_base_bdevs": 4, 00:21:30.934 "num_base_bdevs_discovered": 3, 00:21:30.934 "num_base_bdevs_operational": 3, 00:21:30.934 "base_bdevs_list": [ 00:21:30.934 { 00:21:30.934 "name": null, 00:21:30.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.934 "is_configured": false, 00:21:30.934 "data_offset": 0, 00:21:30.934 "data_size": 63488 00:21:30.934 }, 00:21:30.934 { 00:21:30.934 "name": "BaseBdev2", 00:21:30.934 "uuid": "9a35c6bf-8059-5aa7-b780-cffa7333462d", 00:21:30.934 "is_configured": true, 00:21:30.934 "data_offset": 2048, 00:21:30.934 "data_size": 63488 00:21:30.934 }, 00:21:30.935 { 00:21:30.935 "name": "BaseBdev3", 00:21:30.935 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:30.935 "is_configured": true, 00:21:30.935 "data_offset": 2048, 00:21:30.935 "data_size": 63488 00:21:30.935 }, 00:21:30.935 { 00:21:30.935 "name": "BaseBdev4", 00:21:30.935 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:30.935 "is_configured": true, 00:21:30.935 "data_offset": 2048, 00:21:30.935 "data_size": 63488 00:21:30.935 } 00:21:30.935 ] 00:21:30.935 }' 00:21:30.935 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.935 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.935 [2024-12-06 13:15:37.453780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:30.935 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:30.935 Zero copy mechanism will not be used. 00:21:30.935 Running I/O for 60 seconds... 
00:21:31.501 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:31.501 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.501 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:31.501 [2024-12-06 13:15:37.855350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:31.501 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.501 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:31.501 [2024-12-06 13:15:37.946533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:31.501 [2024-12-06 13:15:37.949168] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.759 [2024-12-06 13:15:38.072084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:31.759 [2024-12-06 13:15:38.072773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:32.017 [2024-12-06 13:15:38.301155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:32.017 [2024-12-06 13:15:38.302065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:32.582 109.00 IOPS, 327.00 MiB/s [2024-12-06T13:15:39.111Z] 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:32.582 
13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.582 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.582 "name": "raid_bdev1", 00:21:32.582 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:32.582 "strip_size_kb": 0, 00:21:32.582 "state": "online", 00:21:32.582 "raid_level": "raid1", 00:21:32.582 "superblock": true, 00:21:32.582 "num_base_bdevs": 4, 00:21:32.582 "num_base_bdevs_discovered": 4, 00:21:32.582 "num_base_bdevs_operational": 4, 00:21:32.582 "process": { 00:21:32.582 "type": "rebuild", 00:21:32.582 "target": "spare", 00:21:32.582 "progress": { 00:21:32.582 "blocks": 10240, 00:21:32.582 "percent": 16 00:21:32.582 } 00:21:32.582 }, 00:21:32.582 "base_bdevs_list": [ 00:21:32.582 { 00:21:32.582 "name": "spare", 00:21:32.582 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:32.582 "is_configured": true, 00:21:32.582 "data_offset": 2048, 00:21:32.582 "data_size": 63488 00:21:32.582 }, 00:21:32.582 { 00:21:32.582 "name": "BaseBdev2", 00:21:32.583 "uuid": "9a35c6bf-8059-5aa7-b780-cffa7333462d", 00:21:32.583 "is_configured": true, 00:21:32.583 "data_offset": 2048, 00:21:32.583 "data_size": 63488 00:21:32.583 }, 00:21:32.583 { 00:21:32.583 "name": "BaseBdev3", 00:21:32.583 "uuid": 
"88077824-111e-5967-9380-0da4f4cfa479", 00:21:32.583 "is_configured": true, 00:21:32.583 "data_offset": 2048, 00:21:32.583 "data_size": 63488 00:21:32.583 }, 00:21:32.583 { 00:21:32.583 "name": "BaseBdev4", 00:21:32.583 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:32.583 "is_configured": true, 00:21:32.583 "data_offset": 2048, 00:21:32.583 "data_size": 63488 00:21:32.583 } 00:21:32.583 ] 00:21:32.583 }' 00:21:32.583 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.583 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.583 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.583 [2024-12-06 13:15:39.037720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:32.583 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.583 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:32.583 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.583 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:32.583 [2024-12-06 13:15:39.087322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.840 [2024-12-06 13:15:39.139659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:32.840 [2024-12-06 13:15:39.140070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:32.840 [2024-12-06 13:15:39.159344] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:32.840 [2024-12-06 
13:15:39.174490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.840 [2024-12-06 13:15:39.174592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.840 [2024-12-06 13:15:39.174615] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:32.840 [2024-12-06 13:15:39.200402] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.840 13:15:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.840 "name": "raid_bdev1", 00:21:32.840 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:32.840 "strip_size_kb": 0, 00:21:32.840 "state": "online", 00:21:32.840 "raid_level": "raid1", 00:21:32.840 "superblock": true, 00:21:32.840 "num_base_bdevs": 4, 00:21:32.840 "num_base_bdevs_discovered": 3, 00:21:32.840 "num_base_bdevs_operational": 3, 00:21:32.840 "base_bdevs_list": [ 00:21:32.840 { 00:21:32.840 "name": null, 00:21:32.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.840 "is_configured": false, 00:21:32.840 "data_offset": 0, 00:21:32.840 "data_size": 63488 00:21:32.840 }, 00:21:32.840 { 00:21:32.840 "name": "BaseBdev2", 00:21:32.840 "uuid": "9a35c6bf-8059-5aa7-b780-cffa7333462d", 00:21:32.840 "is_configured": true, 00:21:32.840 "data_offset": 2048, 00:21:32.840 "data_size": 63488 00:21:32.840 }, 00:21:32.840 { 00:21:32.840 "name": "BaseBdev3", 00:21:32.840 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:32.840 "is_configured": true, 00:21:32.840 "data_offset": 2048, 00:21:32.840 "data_size": 63488 00:21:32.840 }, 00:21:32.840 { 00:21:32.840 "name": "BaseBdev4", 00:21:32.840 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:32.840 "is_configured": true, 00:21:32.840 "data_offset": 2048, 00:21:32.840 "data_size": 63488 00:21:32.840 } 00:21:32.840 ] 00:21:32.840 }' 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.840 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:33.373 120.00 IOPS, 360.00 MiB/s [2024-12-06T13:15:39.902Z] 13:15:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.373 "name": "raid_bdev1", 00:21:33.373 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:33.373 "strip_size_kb": 0, 00:21:33.373 "state": "online", 00:21:33.373 "raid_level": "raid1", 00:21:33.373 "superblock": true, 00:21:33.373 "num_base_bdevs": 4, 00:21:33.373 "num_base_bdevs_discovered": 3, 00:21:33.373 "num_base_bdevs_operational": 3, 00:21:33.373 "base_bdevs_list": [ 00:21:33.373 { 00:21:33.373 "name": null, 00:21:33.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.373 "is_configured": false, 00:21:33.373 "data_offset": 0, 00:21:33.373 "data_size": 63488 00:21:33.373 }, 00:21:33.373 { 00:21:33.373 "name": "BaseBdev2", 00:21:33.373 "uuid": "9a35c6bf-8059-5aa7-b780-cffa7333462d", 00:21:33.373 "is_configured": true, 00:21:33.373 "data_offset": 
2048, 00:21:33.373 "data_size": 63488 00:21:33.373 }, 00:21:33.373 { 00:21:33.373 "name": "BaseBdev3", 00:21:33.373 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:33.373 "is_configured": true, 00:21:33.373 "data_offset": 2048, 00:21:33.373 "data_size": 63488 00:21:33.373 }, 00:21:33.373 { 00:21:33.373 "name": "BaseBdev4", 00:21:33.373 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:33.373 "is_configured": true, 00:21:33.373 "data_offset": 2048, 00:21:33.373 "data_size": 63488 00:21:33.373 } 00:21:33.373 ] 00:21:33.373 }' 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:33.373 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.632 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:33.632 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:33.632 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.632 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:33.632 [2024-12-06 13:15:39.939632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:33.632 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.632 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:33.632 [2024-12-06 13:15:40.009595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:33.632 [2024-12-06 13:15:40.012297] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:33.632 [2024-12-06 13:15:40.153956] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:33.632 [2024-12-06 13:15:40.155754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:33.890 [2024-12-06 13:15:40.408933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:34.149 130.33 IOPS, 391.00 MiB/s [2024-12-06T13:15:40.678Z] [2024-12-06 13:15:40.645195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:34.408 [2024-12-06 13:15:40.885590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.666 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:34.666 "name": "raid_bdev1", 00:21:34.666 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:34.666 "strip_size_kb": 0, 00:21:34.666 "state": "online", 00:21:34.666 "raid_level": "raid1", 00:21:34.666 "superblock": true, 00:21:34.666 "num_base_bdevs": 4, 00:21:34.666 "num_base_bdevs_discovered": 4, 00:21:34.666 "num_base_bdevs_operational": 4, 00:21:34.666 "process": { 00:21:34.666 "type": "rebuild", 00:21:34.666 "target": "spare", 00:21:34.666 "progress": { 00:21:34.666 "blocks": 10240, 00:21:34.666 "percent": 16 00:21:34.666 } 00:21:34.666 }, 00:21:34.666 "base_bdevs_list": [ 00:21:34.666 { 00:21:34.666 "name": "spare", 00:21:34.666 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:34.666 "is_configured": true, 00:21:34.666 "data_offset": 2048, 00:21:34.666 "data_size": 63488 00:21:34.666 }, 00:21:34.666 { 00:21:34.666 "name": "BaseBdev2", 00:21:34.666 "uuid": "9a35c6bf-8059-5aa7-b780-cffa7333462d", 00:21:34.666 "is_configured": true, 00:21:34.666 "data_offset": 2048, 00:21:34.666 "data_size": 63488 00:21:34.666 }, 00:21:34.666 { 00:21:34.666 "name": "BaseBdev3", 00:21:34.666 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:34.666 "is_configured": true, 00:21:34.666 "data_offset": 2048, 00:21:34.666 "data_size": 63488 00:21:34.666 }, 00:21:34.666 { 00:21:34.666 "name": "BaseBdev4", 00:21:34.666 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:34.666 "is_configured": true, 00:21:34.666 "data_offset": 2048, 00:21:34.666 "data_size": 63488 00:21:34.666 } 00:21:34.666 ] 00:21:34.666 }' 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:34.666 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.666 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:34.666 [2024-12-06 13:15:41.151785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:34.925 [2024-12-06 13:15:41.403931] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:21:34.925 [2024-12-06 13:15:41.403999] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:34.925 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.185 "name": "raid_bdev1", 00:21:35.185 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:35.185 "strip_size_kb": 0, 00:21:35.185 "state": "online", 00:21:35.185 "raid_level": "raid1", 00:21:35.185 "superblock": true, 00:21:35.185 "num_base_bdevs": 4, 00:21:35.185 "num_base_bdevs_discovered": 3, 00:21:35.185 "num_base_bdevs_operational": 3, 00:21:35.185 "process": { 00:21:35.185 "type": "rebuild", 00:21:35.185 "target": "spare", 00:21:35.185 "progress": { 00:21:35.185 "blocks": 14336, 00:21:35.185 "percent": 22 00:21:35.185 } 00:21:35.185 }, 00:21:35.185 "base_bdevs_list": [ 00:21:35.185 { 00:21:35.185 "name": "spare", 00:21:35.185 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:35.185 "is_configured": true, 00:21:35.185 "data_offset": 2048, 00:21:35.185 "data_size": 63488 00:21:35.185 }, 00:21:35.185 { 00:21:35.185 "name": null, 00:21:35.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.185 "is_configured": false, 00:21:35.185 "data_offset": 0, 00:21:35.185 
"data_size": 63488 00:21:35.185 }, 00:21:35.185 { 00:21:35.185 "name": "BaseBdev3", 00:21:35.185 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:35.185 "is_configured": true, 00:21:35.185 "data_offset": 2048, 00:21:35.185 "data_size": 63488 00:21:35.185 }, 00:21:35.185 { 00:21:35.185 "name": "BaseBdev4", 00:21:35.185 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:35.185 "is_configured": true, 00:21:35.185 "data_offset": 2048, 00:21:35.185 "data_size": 63488 00:21:35.185 } 00:21:35.185 ] 00:21:35.185 }' 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.185 109.00 IOPS, 327.00 MiB/s [2024-12-06T13:15:41.714Z] [2024-12-06 13:15:41.519142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=553 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.185 "name": "raid_bdev1", 00:21:35.185 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:35.185 "strip_size_kb": 0, 00:21:35.185 "state": "online", 00:21:35.185 "raid_level": "raid1", 00:21:35.185 "superblock": true, 00:21:35.185 "num_base_bdevs": 4, 00:21:35.185 "num_base_bdevs_discovered": 3, 00:21:35.185 "num_base_bdevs_operational": 3, 00:21:35.185 "process": { 00:21:35.185 "type": "rebuild", 00:21:35.185 "target": "spare", 00:21:35.185 "progress": { 00:21:35.185 "blocks": 16384, 00:21:35.185 "percent": 25 00:21:35.185 } 00:21:35.185 }, 00:21:35.185 "base_bdevs_list": [ 00:21:35.185 { 00:21:35.185 "name": "spare", 00:21:35.185 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:35.185 "is_configured": true, 00:21:35.185 "data_offset": 2048, 00:21:35.185 "data_size": 63488 00:21:35.185 }, 00:21:35.185 { 00:21:35.185 "name": null, 00:21:35.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.185 "is_configured": false, 00:21:35.185 "data_offset": 0, 00:21:35.185 "data_size": 63488 00:21:35.185 }, 00:21:35.185 { 00:21:35.185 "name": "BaseBdev3", 00:21:35.185 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:35.185 "is_configured": true, 00:21:35.185 "data_offset": 2048, 00:21:35.185 "data_size": 63488 00:21:35.185 }, 00:21:35.185 { 00:21:35.185 "name": "BaseBdev4", 00:21:35.185 
"uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:35.185 "is_configured": true, 00:21:35.185 "data_offset": 2048, 00:21:35.185 "data_size": 63488 00:21:35.185 } 00:21:35.185 ] 00:21:35.185 }' 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.185 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.443 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.443 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:35.443 [2024-12-06 13:15:41.940918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:36.034 [2024-12-06 13:15:42.402347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:36.292 97.00 IOPS, 291.00 MiB/s [2024-12-06T13:15:42.821Z] [2024-12-06 13:15:42.645282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:36.292 [2024-12-06 13:15:42.646011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.292 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.292 "name": "raid_bdev1", 00:21:36.292 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:36.292 "strip_size_kb": 0, 00:21:36.292 "state": "online", 00:21:36.292 "raid_level": "raid1", 00:21:36.293 "superblock": true, 00:21:36.293 "num_base_bdevs": 4, 00:21:36.293 "num_base_bdevs_discovered": 3, 00:21:36.293 "num_base_bdevs_operational": 3, 00:21:36.293 "process": { 00:21:36.293 "type": "rebuild", 00:21:36.293 "target": "spare", 00:21:36.293 "progress": { 00:21:36.293 "blocks": 32768, 00:21:36.293 "percent": 51 00:21:36.293 } 00:21:36.293 }, 00:21:36.293 "base_bdevs_list": [ 00:21:36.293 { 00:21:36.293 "name": "spare", 00:21:36.293 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:36.293 "is_configured": true, 00:21:36.293 "data_offset": 2048, 00:21:36.293 "data_size": 63488 00:21:36.293 }, 00:21:36.293 { 00:21:36.293 "name": null, 00:21:36.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.293 "is_configured": false, 00:21:36.293 "data_offset": 0, 00:21:36.293 "data_size": 63488 00:21:36.293 }, 00:21:36.293 { 00:21:36.293 "name": "BaseBdev3", 00:21:36.293 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:36.293 
"is_configured": true, 00:21:36.293 "data_offset": 2048, 00:21:36.293 "data_size": 63488 00:21:36.293 }, 00:21:36.293 { 00:21:36.293 "name": "BaseBdev4", 00:21:36.293 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:36.293 "is_configured": true, 00:21:36.293 "data_offset": 2048, 00:21:36.293 "data_size": 63488 00:21:36.293 } 00:21:36.293 ] 00:21:36.293 }' 00:21:36.293 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.551 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:36.551 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.551 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.551 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:36.551 [2024-12-06 13:15:43.037009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:36.810 [2024-12-06 13:15:43.269972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:36.810 [2024-12-06 13:15:43.270720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:37.636 88.33 IOPS, 265.00 MiB/s [2024-12-06T13:15:44.165Z] 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:37.636 13:15:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:37.636 "name": "raid_bdev1", 00:21:37.636 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:37.636 "strip_size_kb": 0, 00:21:37.636 "state": "online", 00:21:37.636 "raid_level": "raid1", 00:21:37.636 "superblock": true, 00:21:37.636 "num_base_bdevs": 4, 00:21:37.636 "num_base_bdevs_discovered": 3, 00:21:37.636 "num_base_bdevs_operational": 3, 00:21:37.636 "process": { 00:21:37.636 "type": "rebuild", 00:21:37.636 "target": "spare", 00:21:37.636 "progress": { 00:21:37.636 "blocks": 49152, 00:21:37.636 "percent": 77 00:21:37.636 } 00:21:37.636 }, 00:21:37.636 "base_bdevs_list": [ 00:21:37.636 { 00:21:37.636 "name": "spare", 00:21:37.636 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:37.636 "is_configured": true, 00:21:37.636 "data_offset": 2048, 00:21:37.636 "data_size": 63488 00:21:37.636 }, 00:21:37.636 { 00:21:37.636 "name": null, 00:21:37.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.636 "is_configured": false, 00:21:37.636 "data_offset": 0, 00:21:37.636 "data_size": 63488 00:21:37.636 }, 00:21:37.636 { 00:21:37.636 "name": "BaseBdev3", 00:21:37.636 "uuid": 
"88077824-111e-5967-9380-0da4f4cfa479", 00:21:37.636 "is_configured": true, 00:21:37.636 "data_offset": 2048, 00:21:37.636 "data_size": 63488 00:21:37.636 }, 00:21:37.636 { 00:21:37.636 "name": "BaseBdev4", 00:21:37.636 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:37.636 "is_configured": true, 00:21:37.636 "data_offset": 2048, 00:21:37.636 "data_size": 63488 00:21:37.636 } 00:21:37.636 ] 00:21:37.636 }' 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.636 13:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.636 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.636 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:37.636 [2024-12-06 13:15:44.104818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:37.636 [2024-12-06 13:15:44.105424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:38.204 [2024-12-06 13:15:44.450678] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:38.204 [2024-12-06 13:15:44.451846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:38.463 80.57 IOPS, 241.71 MiB/s [2024-12-06T13:15:44.992Z] [2024-12-06 13:15:44.916372] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:38.720 [2024-12-06 13:15:45.024561] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:38.720 [2024-12-06 13:15:45.029515] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.720 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.720 "name": "raid_bdev1", 00:21:38.720 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:38.720 "strip_size_kb": 0, 00:21:38.720 "state": "online", 00:21:38.720 "raid_level": "raid1", 00:21:38.720 "superblock": true, 00:21:38.720 "num_base_bdevs": 4, 00:21:38.720 "num_base_bdevs_discovered": 3, 00:21:38.720 "num_base_bdevs_operational": 3, 00:21:38.720 "base_bdevs_list": [ 00:21:38.720 { 00:21:38.720 "name": "spare", 00:21:38.720 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:38.720 "is_configured": true, 00:21:38.720 "data_offset": 2048, 
00:21:38.720 "data_size": 63488 00:21:38.720 }, 00:21:38.720 { 00:21:38.720 "name": null, 00:21:38.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.720 "is_configured": false, 00:21:38.720 "data_offset": 0, 00:21:38.721 "data_size": 63488 00:21:38.721 }, 00:21:38.721 { 00:21:38.721 "name": "BaseBdev3", 00:21:38.721 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:38.721 "is_configured": true, 00:21:38.721 "data_offset": 2048, 00:21:38.721 "data_size": 63488 00:21:38.721 }, 00:21:38.721 { 00:21:38.721 "name": "BaseBdev4", 00:21:38.721 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:38.721 "is_configured": true, 00:21:38.721 "data_offset": 2048, 00:21:38.721 "data_size": 63488 00:21:38.721 } 00:21:38.721 ] 00:21:38.721 }' 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:38.721 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.977 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.977 "name": "raid_bdev1", 00:21:38.977 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:38.977 "strip_size_kb": 0, 00:21:38.977 "state": "online", 00:21:38.977 "raid_level": "raid1", 00:21:38.977 "superblock": true, 00:21:38.977 "num_base_bdevs": 4, 00:21:38.977 "num_base_bdevs_discovered": 3, 00:21:38.977 "num_base_bdevs_operational": 3, 00:21:38.977 "base_bdevs_list": [ 00:21:38.977 { 00:21:38.977 "name": "spare", 00:21:38.977 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:38.977 "is_configured": true, 00:21:38.977 "data_offset": 2048, 00:21:38.977 "data_size": 63488 00:21:38.977 }, 00:21:38.977 { 00:21:38.977 "name": null, 00:21:38.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.977 "is_configured": false, 00:21:38.977 "data_offset": 0, 00:21:38.977 "data_size": 63488 00:21:38.977 }, 00:21:38.977 { 00:21:38.977 "name": "BaseBdev3", 00:21:38.977 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:38.978 "is_configured": true, 00:21:38.978 "data_offset": 2048, 00:21:38.978 "data_size": 63488 00:21:38.978 }, 00:21:38.978 { 00:21:38.978 "name": "BaseBdev4", 00:21:38.978 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:38.978 "is_configured": true, 00:21:38.978 "data_offset": 2048, 00:21:38.978 "data_size": 63488 00:21:38.978 } 00:21:38.978 ] 00:21:38.978 }' 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.978 
13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.978 "name": "raid_bdev1", 00:21:38.978 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:38.978 "strip_size_kb": 0, 00:21:38.978 "state": "online", 00:21:38.978 "raid_level": "raid1", 00:21:38.978 "superblock": true, 00:21:38.978 "num_base_bdevs": 4, 00:21:38.978 "num_base_bdevs_discovered": 3, 00:21:38.978 "num_base_bdevs_operational": 3, 00:21:38.978 "base_bdevs_list": [ 00:21:38.978 { 00:21:38.978 "name": "spare", 00:21:38.978 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:38.978 "is_configured": true, 00:21:38.978 "data_offset": 2048, 00:21:38.978 "data_size": 63488 00:21:38.978 }, 00:21:38.978 { 00:21:38.978 "name": null, 00:21:38.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.978 "is_configured": false, 00:21:38.978 "data_offset": 0, 00:21:38.978 "data_size": 63488 00:21:38.978 }, 00:21:38.978 { 00:21:38.978 "name": "BaseBdev3", 00:21:38.978 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:38.978 "is_configured": true, 00:21:38.978 "data_offset": 2048, 00:21:38.978 "data_size": 63488 00:21:38.978 }, 00:21:38.978 { 00:21:38.978 "name": "BaseBdev4", 00:21:38.978 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:38.978 "is_configured": true, 00:21:38.978 "data_offset": 2048, 00:21:38.978 "data_size": 63488 00:21:38.978 } 00:21:38.978 ] 00:21:38.978 }' 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.978 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.542 74.12 IOPS, 222.38 MiB/s [2024-12-06T13:15:46.071Z] 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.542 13:15:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.542 [2024-12-06 13:15:45.894635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.542 [2024-12-06 13:15:45.894685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:39.542 00:21:39.542 Latency(us) 00:21:39.542 [2024-12-06T13:15:46.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.542 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:39.542 raid_bdev1 : 8.46 71.97 215.91 0.00 0.00 18591.06 310.92 124875.87 00:21:39.542 [2024-12-06T13:15:46.071Z] =================================================================================================================== 00:21:39.542 [2024-12-06T13:15:46.071Z] Total : 71.97 215.91 0.00 0.00 18591.06 310.92 124875.87 00:21:39.542 [2024-12-06 13:15:45.938662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.542 [2024-12-06 13:15:45.938773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.542 { 00:21:39.542 "results": [ 00:21:39.542 { 00:21:39.542 "job": "raid_bdev1", 00:21:39.542 "core_mask": "0x1", 00:21:39.542 "workload": "randrw", 00:21:39.542 "percentage": 50, 00:21:39.542 "status": "finished", 00:21:39.542 "queue_depth": 2, 00:21:39.542 "io_size": 3145728, 00:21:39.542 "runtime": 8.462011, 00:21:39.542 "iops": 71.96870814750773, 00:21:39.542 "mibps": 215.90612444252318, 00:21:39.542 "io_failed": 0, 00:21:39.542 "io_timeout": 0, 00:21:39.542 "avg_latency_us": 18591.058581877893, 00:21:39.542 "min_latency_us": 310.9236363636364, 00:21:39.542 "max_latency_us": 124875.8690909091 00:21:39.542 } 00:21:39.542 ], 00:21:39.542 "core_count": 1 00:21:39.542 } 00:21:39.542 [2024-12-06 13:15:45.938917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:39.542 [2024-12-06 13:15:45.938949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:39.542 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.542 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:39.542 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.542 13:15:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.542 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:21:39.881 /dev/nbd0 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:39.881 1+0 records in 00:21:39.881 1+0 records out 00:21:39.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377499 s, 10.9 MB/s 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- 
# (( i < 1 )) 00:21:39.881 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:40.140 /dev/nbd1 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:40.140 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:40.400 1+0 records in 00:21:40.400 1+0 records out 00:21:40.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491833 s, 8.3 MB/s 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:40.400 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:40.659 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:40.916 /dev/nbd1 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:41.173 13:15:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.173 1+0 records in 00:21:41.173 1+0 records out 00:21:41.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476774 s, 8.6 MB/s 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:21:41.173 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.174 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0') 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.431 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.690 [2024-12-06 13:15:48.163563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:41.690 [2024-12-06 13:15:48.163637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.690 [2024-12-06 13:15:48.163669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:41.690 [2024-12-06 13:15:48.163689] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.690 [2024-12-06 13:15:48.166812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.690 [2024-12-06 13:15:48.166866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:41.690 [2024-12-06 13:15:48.166992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:41.690 [2024-12-06 13:15:48.167065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:41.690 [2024-12-06 13:15:48.167270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.690 [2024-12-06 13:15:48.167467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:41.690 spare 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.690 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.947 [2024-12-06 13:15:48.267614] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:21:41.947 [2024-12-06 13:15:48.267667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:41.947 [2024-12-06 13:15:48.268122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:21:41.947 [2024-12-06 13:15:48.268417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:41.947 [2024-12-06 13:15:48.268459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:41.947 [2024-12-06 13:15:48.268731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.947 "name": "raid_bdev1", 00:21:41.947 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:41.947 "strip_size_kb": 0, 00:21:41.947 "state": "online", 00:21:41.947 "raid_level": "raid1", 00:21:41.947 "superblock": true, 00:21:41.947 "num_base_bdevs": 4, 00:21:41.947 "num_base_bdevs_discovered": 3, 00:21:41.947 "num_base_bdevs_operational": 3, 00:21:41.947 "base_bdevs_list": [ 00:21:41.947 { 00:21:41.947 "name": "spare", 00:21:41.947 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:41.947 "is_configured": true, 00:21:41.947 "data_offset": 2048, 00:21:41.947 "data_size": 63488 00:21:41.947 }, 00:21:41.947 { 00:21:41.947 "name": null, 00:21:41.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.947 "is_configured": false, 00:21:41.947 "data_offset": 2048, 00:21:41.947 "data_size": 63488 00:21:41.947 }, 00:21:41.947 { 00:21:41.947 "name": "BaseBdev3", 00:21:41.947 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:41.947 "is_configured": true, 00:21:41.947 "data_offset": 2048, 00:21:41.947 "data_size": 63488 00:21:41.947 }, 00:21:41.947 { 00:21:41.947 "name": "BaseBdev4", 00:21:41.947 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:41.947 "is_configured": true, 00:21:41.947 "data_offset": 2048, 00:21:41.947 "data_size": 63488 00:21:41.947 } 00:21:41.947 ] 00:21:41.947 }' 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.947 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.514 "name": "raid_bdev1", 00:21:42.514 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:42.514 "strip_size_kb": 0, 00:21:42.514 "state": "online", 00:21:42.514 "raid_level": "raid1", 00:21:42.514 "superblock": true, 00:21:42.514 "num_base_bdevs": 4, 00:21:42.514 "num_base_bdevs_discovered": 3, 00:21:42.514 "num_base_bdevs_operational": 3, 00:21:42.514 "base_bdevs_list": [ 00:21:42.514 { 00:21:42.514 "name": "spare", 00:21:42.514 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:42.514 "is_configured": true, 00:21:42.514 "data_offset": 2048, 00:21:42.514 "data_size": 63488 00:21:42.514 }, 
00:21:42.514 { 00:21:42.514 "name": null, 00:21:42.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.514 "is_configured": false, 00:21:42.514 "data_offset": 2048, 00:21:42.514 "data_size": 63488 00:21:42.514 }, 00:21:42.514 { 00:21:42.514 "name": "BaseBdev3", 00:21:42.514 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:42.514 "is_configured": true, 00:21:42.514 "data_offset": 2048, 00:21:42.514 "data_size": 63488 00:21:42.514 }, 00:21:42.514 { 00:21:42.514 "name": "BaseBdev4", 00:21:42.514 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:42.514 "is_configured": true, 00:21:42.514 "data_offset": 2048, 00:21:42.514 "data_size": 63488 00:21:42.514 } 00:21:42.514 ] 00:21:42.514 }' 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:42.514 13:15:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:42.514 [2024-12-06 13:15:48.992997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.514 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.515 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.515 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.515 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.515 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:42.515 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.515 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.515 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.515 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.515 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.515 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.515 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.515 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:42.515 
13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.772 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.772 "name": "raid_bdev1", 00:21:42.772 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:42.772 "strip_size_kb": 0, 00:21:42.772 "state": "online", 00:21:42.772 "raid_level": "raid1", 00:21:42.772 "superblock": true, 00:21:42.772 "num_base_bdevs": 4, 00:21:42.772 "num_base_bdevs_discovered": 2, 00:21:42.772 "num_base_bdevs_operational": 2, 00:21:42.772 "base_bdevs_list": [ 00:21:42.772 { 00:21:42.772 "name": null, 00:21:42.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.772 "is_configured": false, 00:21:42.772 "data_offset": 0, 00:21:42.772 "data_size": 63488 00:21:42.772 }, 00:21:42.772 { 00:21:42.772 "name": null, 00:21:42.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.772 "is_configured": false, 00:21:42.772 "data_offset": 2048, 00:21:42.772 "data_size": 63488 00:21:42.772 }, 00:21:42.772 { 00:21:42.772 "name": "BaseBdev3", 00:21:42.772 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:42.772 "is_configured": true, 00:21:42.772 "data_offset": 2048, 00:21:42.772 "data_size": 63488 00:21:42.772 }, 00:21:42.772 { 00:21:42.772 "name": "BaseBdev4", 00:21:42.772 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:42.772 "is_configured": true, 00:21:42.772 "data_offset": 2048, 00:21:42.772 "data_size": 63488 00:21:42.772 } 00:21:42.772 ] 00:21:42.772 }' 00:21:42.772 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.772 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.030 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.030 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.030 13:15:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:43.031 [2024-12-06 13:15:49.521280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.031 [2024-12-06 13:15:49.521560] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:43.031 [2024-12-06 13:15:49.521590] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:43.031 [2024-12-06 13:15:49.521647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.031 [2024-12-06 13:15:49.535726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:21:43.031 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.031 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:43.031 [2024-12-06 13:15:49.538318] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:21:44.026 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.284 "name": "raid_bdev1", 00:21:44.284 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:44.284 "strip_size_kb": 0, 00:21:44.284 "state": "online", 00:21:44.284 "raid_level": "raid1", 00:21:44.284 "superblock": true, 00:21:44.284 "num_base_bdevs": 4, 00:21:44.284 "num_base_bdevs_discovered": 3, 00:21:44.284 "num_base_bdevs_operational": 3, 00:21:44.284 "process": { 00:21:44.284 "type": "rebuild", 00:21:44.284 "target": "spare", 00:21:44.284 "progress": { 00:21:44.284 "blocks": 20480, 00:21:44.284 "percent": 32 00:21:44.284 } 00:21:44.284 }, 00:21:44.284 "base_bdevs_list": [ 00:21:44.284 { 00:21:44.284 "name": "spare", 00:21:44.284 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:44.284 "is_configured": true, 00:21:44.284 "data_offset": 2048, 00:21:44.284 "data_size": 63488 00:21:44.284 }, 00:21:44.284 { 00:21:44.284 "name": null, 00:21:44.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.284 "is_configured": false, 00:21:44.284 "data_offset": 2048, 00:21:44.284 "data_size": 63488 00:21:44.284 }, 00:21:44.284 { 00:21:44.284 "name": "BaseBdev3", 00:21:44.284 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:44.284 "is_configured": true, 00:21:44.284 "data_offset": 2048, 00:21:44.284 "data_size": 63488 00:21:44.284 }, 00:21:44.284 { 00:21:44.284 "name": "BaseBdev4", 00:21:44.284 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:44.284 "is_configured": true, 00:21:44.284 "data_offset": 2048, 00:21:44.284 "data_size": 63488 00:21:44.284 } 00:21:44.284 ] 00:21:44.284 }' 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:44.284 [2024-12-06 13:15:50.695750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.284 [2024-12-06 13:15:50.747663] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:44.284 [2024-12-06 13:15:50.747765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.284 [2024-12-06 13:15:50.747791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.284 [2024-12-06 13:15:50.747809] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:44.284 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.542 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.542 "name": "raid_bdev1", 00:21:44.542 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:44.542 "strip_size_kb": 0, 00:21:44.542 "state": "online", 00:21:44.542 "raid_level": "raid1", 00:21:44.542 "superblock": true, 00:21:44.542 "num_base_bdevs": 4, 00:21:44.542 "num_base_bdevs_discovered": 2, 00:21:44.542 "num_base_bdevs_operational": 2, 00:21:44.542 "base_bdevs_list": [ 00:21:44.542 { 00:21:44.542 "name": null, 00:21:44.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.542 "is_configured": false, 00:21:44.542 "data_offset": 0, 00:21:44.542 "data_size": 63488 00:21:44.542 }, 00:21:44.542 { 00:21:44.542 "name": null, 00:21:44.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.542 "is_configured": false, 00:21:44.542 "data_offset": 2048, 
00:21:44.542 "data_size": 63488 00:21:44.542 }, 00:21:44.542 { 00:21:44.542 "name": "BaseBdev3", 00:21:44.542 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:44.542 "is_configured": true, 00:21:44.542 "data_offset": 2048, 00:21:44.542 "data_size": 63488 00:21:44.542 }, 00:21:44.543 { 00:21:44.543 "name": "BaseBdev4", 00:21:44.543 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:44.543 "is_configured": true, 00:21:44.543 "data_offset": 2048, 00:21:44.543 "data_size": 63488 00:21:44.543 } 00:21:44.543 ] 00:21:44.543 }' 00:21:44.543 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.543 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:44.801 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:44.801 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.801 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.059 [2024-12-06 13:15:51.331426] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:45.059 [2024-12-06 13:15:51.331555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.059 [2024-12-06 13:15:51.331594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:45.059 [2024-12-06 13:15:51.331614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.059 [2024-12-06 13:15:51.332265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.059 [2024-12-06 13:15:51.332305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:45.059 [2024-12-06 13:15:51.332429] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:45.059 [2024-12-06 13:15:51.332501] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:45.059 [2024-12-06 13:15:51.332530] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:45.059 [2024-12-06 13:15:51.332564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.059 [2024-12-06 13:15:51.346776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:21:45.059 spare 00:21:45.059 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.059 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:45.059 [2024-12-06 13:15:51.349566] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.993 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.993 "name": "raid_bdev1", 00:21:45.993 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:45.993 "strip_size_kb": 0, 00:21:45.993 "state": "online", 00:21:45.993 "raid_level": "raid1", 00:21:45.993 "superblock": true, 00:21:45.993 "num_base_bdevs": 4, 00:21:45.993 "num_base_bdevs_discovered": 3, 00:21:45.993 "num_base_bdevs_operational": 3, 00:21:45.993 "process": { 00:21:45.993 "type": "rebuild", 00:21:45.993 "target": "spare", 00:21:45.993 "progress": { 00:21:45.993 "blocks": 20480, 00:21:45.993 "percent": 32 00:21:45.993 } 00:21:45.993 }, 00:21:45.993 "base_bdevs_list": [ 00:21:45.993 { 00:21:45.993 "name": "spare", 00:21:45.993 "uuid": "38809727-7c34-5c6c-9f76-a0b151cef1e5", 00:21:45.993 "is_configured": true, 00:21:45.993 "data_offset": 2048, 00:21:45.993 "data_size": 63488 00:21:45.993 }, 00:21:45.993 { 00:21:45.993 "name": null, 00:21:45.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.993 "is_configured": false, 00:21:45.993 "data_offset": 2048, 00:21:45.993 "data_size": 63488 00:21:45.993 }, 00:21:45.993 { 00:21:45.993 "name": "BaseBdev3", 00:21:45.993 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:45.993 "is_configured": true, 00:21:45.993 "data_offset": 2048, 00:21:45.993 "data_size": 63488 00:21:45.993 }, 00:21:45.993 { 00:21:45.993 "name": "BaseBdev4", 00:21:45.994 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:45.994 "is_configured": true, 00:21:45.994 "data_offset": 2048, 00:21:45.994 "data_size": 63488 00:21:45.994 } 00:21:45.994 ] 00:21:45.994 }' 00:21:45.994 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:45.994 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:45.994 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:21:45.994 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:45.994 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:45.994 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.994 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:45.994 [2024-12-06 13:15:52.515204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:46.252 [2024-12-06 13:15:52.559176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:46.252 [2024-12-06 13:15:52.559272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.252 [2024-12-06 13:15:52.559307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:46.252 [2024-12-06 13:15:52.559319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.252 "name": "raid_bdev1", 00:21:46.252 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:46.252 "strip_size_kb": 0, 00:21:46.252 "state": "online", 00:21:46.252 "raid_level": "raid1", 00:21:46.252 "superblock": true, 00:21:46.252 "num_base_bdevs": 4, 00:21:46.252 "num_base_bdevs_discovered": 2, 00:21:46.252 "num_base_bdevs_operational": 2, 00:21:46.252 "base_bdevs_list": [ 00:21:46.252 { 00:21:46.252 "name": null, 00:21:46.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.252 "is_configured": false, 00:21:46.252 "data_offset": 0, 00:21:46.252 "data_size": 63488 00:21:46.252 }, 00:21:46.252 { 00:21:46.252 "name": null, 00:21:46.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.252 "is_configured": false, 00:21:46.252 "data_offset": 2048, 00:21:46.252 "data_size": 63488 00:21:46.252 }, 00:21:46.252 { 00:21:46.252 "name": "BaseBdev3", 00:21:46.252 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:46.252 "is_configured": true, 
00:21:46.252 "data_offset": 2048, 00:21:46.252 "data_size": 63488 00:21:46.252 }, 00:21:46.252 { 00:21:46.252 "name": "BaseBdev4", 00:21:46.252 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:46.252 "is_configured": true, 00:21:46.252 "data_offset": 2048, 00:21:46.252 "data_size": 63488 00:21:46.252 } 00:21:46.252 ] 00:21:46.252 }' 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.252 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.817 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.818 "name": "raid_bdev1", 00:21:46.818 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:46.818 "strip_size_kb": 0, 00:21:46.818 "state": "online", 00:21:46.818 "raid_level": "raid1", 00:21:46.818 
"superblock": true, 00:21:46.818 "num_base_bdevs": 4, 00:21:46.818 "num_base_bdevs_discovered": 2, 00:21:46.818 "num_base_bdevs_operational": 2, 00:21:46.818 "base_bdevs_list": [ 00:21:46.818 { 00:21:46.818 "name": null, 00:21:46.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.818 "is_configured": false, 00:21:46.818 "data_offset": 0, 00:21:46.818 "data_size": 63488 00:21:46.818 }, 00:21:46.818 { 00:21:46.818 "name": null, 00:21:46.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.818 "is_configured": false, 00:21:46.818 "data_offset": 2048, 00:21:46.818 "data_size": 63488 00:21:46.818 }, 00:21:46.818 { 00:21:46.818 "name": "BaseBdev3", 00:21:46.818 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:46.818 "is_configured": true, 00:21:46.818 "data_offset": 2048, 00:21:46.818 "data_size": 63488 00:21:46.818 }, 00:21:46.818 { 00:21:46.818 "name": "BaseBdev4", 00:21:46.818 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:46.818 "is_configured": true, 00:21:46.818 "data_offset": 2048, 00:21:46.818 "data_size": 63488 00:21:46.818 } 00:21:46.818 ] 00:21:46.818 }' 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:46.818 [2024-12-06 13:15:53.298309] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:46.818 [2024-12-06 13:15:53.298633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.818 [2024-12-06 13:15:53.298684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:21:46.818 [2024-12-06 13:15:53.298702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.818 [2024-12-06 13:15:53.299324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.818 [2024-12-06 13:15:53.299359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:46.818 [2024-12-06 13:15:53.299500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:46.818 [2024-12-06 13:15:53.299527] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:46.818 [2024-12-06 13:15:53.299553] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:46.818 [2024-12-06 13:15:53.299567] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:46.818 BaseBdev1 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.818 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.194 "name": "raid_bdev1", 00:21:48.194 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:48.194 "strip_size_kb": 0, 00:21:48.194 "state": "online", 00:21:48.194 "raid_level": "raid1", 00:21:48.194 "superblock": true, 00:21:48.194 
"num_base_bdevs": 4, 00:21:48.194 "num_base_bdevs_discovered": 2, 00:21:48.194 "num_base_bdevs_operational": 2, 00:21:48.194 "base_bdevs_list": [ 00:21:48.194 { 00:21:48.194 "name": null, 00:21:48.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.194 "is_configured": false, 00:21:48.194 "data_offset": 0, 00:21:48.194 "data_size": 63488 00:21:48.194 }, 00:21:48.194 { 00:21:48.194 "name": null, 00:21:48.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.194 "is_configured": false, 00:21:48.194 "data_offset": 2048, 00:21:48.194 "data_size": 63488 00:21:48.194 }, 00:21:48.194 { 00:21:48.194 "name": "BaseBdev3", 00:21:48.194 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:48.194 "is_configured": true, 00:21:48.194 "data_offset": 2048, 00:21:48.194 "data_size": 63488 00:21:48.194 }, 00:21:48.194 { 00:21:48.194 "name": "BaseBdev4", 00:21:48.194 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:48.194 "is_configured": true, 00:21:48.194 "data_offset": 2048, 00:21:48.194 "data_size": 63488 00:21:48.194 } 00:21:48.194 ] 00:21:48.194 }' 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.194 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.453 13:15:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.453 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.453 "name": "raid_bdev1", 00:21:48.453 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:48.453 "strip_size_kb": 0, 00:21:48.453 "state": "online", 00:21:48.453 "raid_level": "raid1", 00:21:48.453 "superblock": true, 00:21:48.453 "num_base_bdevs": 4, 00:21:48.453 "num_base_bdevs_discovered": 2, 00:21:48.453 "num_base_bdevs_operational": 2, 00:21:48.453 "base_bdevs_list": [ 00:21:48.453 { 00:21:48.453 "name": null, 00:21:48.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.453 "is_configured": false, 00:21:48.453 "data_offset": 0, 00:21:48.453 "data_size": 63488 00:21:48.453 }, 00:21:48.453 { 00:21:48.453 "name": null, 00:21:48.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.454 "is_configured": false, 00:21:48.454 "data_offset": 2048, 00:21:48.454 "data_size": 63488 00:21:48.454 }, 00:21:48.454 { 00:21:48.454 "name": "BaseBdev3", 00:21:48.454 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:48.454 "is_configured": true, 00:21:48.454 "data_offset": 2048, 00:21:48.454 "data_size": 63488 00:21:48.454 }, 00:21:48.454 { 00:21:48.454 "name": "BaseBdev4", 00:21:48.454 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:48.454 "is_configured": true, 00:21:48.454 "data_offset": 2048, 00:21:48.454 "data_size": 63488 00:21:48.454 } 00:21:48.454 ] 00:21:48.454 }' 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.454 13:15:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:48.454 [2024-12-06 13:15:54.963168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.454 [2024-12-06 13:15:54.963384] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:48.454 [2024-12-06 13:15:54.963410] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:21:48.454 request: 00:21:48.454 { 00:21:48.454 "base_bdev": "BaseBdev1", 00:21:48.454 "raid_bdev": "raid_bdev1", 00:21:48.454 "method": "bdev_raid_add_base_bdev", 00:21:48.454 "req_id": 1 00:21:48.454 } 00:21:48.454 Got JSON-RPC error response 00:21:48.454 response: 00:21:48.454 { 00:21:48.454 "code": -22, 00:21:48.454 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:48.454 } 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.454 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.895 13:15:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:49.895 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.895 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.895 "name": "raid_bdev1", 00:21:49.895 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:49.895 "strip_size_kb": 0, 00:21:49.895 "state": "online", 00:21:49.895 "raid_level": "raid1", 00:21:49.895 "superblock": true, 00:21:49.896 "num_base_bdevs": 4, 00:21:49.896 "num_base_bdevs_discovered": 2, 00:21:49.896 "num_base_bdevs_operational": 2, 00:21:49.896 "base_bdevs_list": [ 00:21:49.896 { 00:21:49.896 "name": null, 00:21:49.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.896 "is_configured": false, 00:21:49.896 "data_offset": 0, 00:21:49.896 "data_size": 63488 00:21:49.896 }, 00:21:49.896 { 00:21:49.896 "name": null, 00:21:49.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.896 "is_configured": false, 00:21:49.896 "data_offset": 2048, 00:21:49.896 "data_size": 63488 00:21:49.896 }, 00:21:49.896 { 00:21:49.896 "name": "BaseBdev3", 00:21:49.896 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:49.896 "is_configured": true, 00:21:49.896 "data_offset": 2048, 00:21:49.896 "data_size": 63488 00:21:49.896 }, 00:21:49.896 { 00:21:49.896 "name": "BaseBdev4", 00:21:49.896 "uuid": 
"75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:49.896 "is_configured": true, 00:21:49.896 "data_offset": 2048, 00:21:49.896 "data_size": 63488 00:21:49.896 } 00:21:49.896 ] 00:21:49.896 }' 00:21:49.896 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.896 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.154 "name": "raid_bdev1", 00:21:50.154 "uuid": "c7b537c3-42de-4890-8578-09b8f8f67df0", 00:21:50.154 "strip_size_kb": 0, 00:21:50.154 "state": "online", 00:21:50.154 "raid_level": "raid1", 00:21:50.154 "superblock": true, 00:21:50.154 "num_base_bdevs": 4, 00:21:50.154 "num_base_bdevs_discovered": 2, 00:21:50.154 "num_base_bdevs_operational": 2, 00:21:50.154 
"base_bdevs_list": [ 00:21:50.154 { 00:21:50.154 "name": null, 00:21:50.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.154 "is_configured": false, 00:21:50.154 "data_offset": 0, 00:21:50.154 "data_size": 63488 00:21:50.154 }, 00:21:50.154 { 00:21:50.154 "name": null, 00:21:50.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.154 "is_configured": false, 00:21:50.154 "data_offset": 2048, 00:21:50.154 "data_size": 63488 00:21:50.154 }, 00:21:50.154 { 00:21:50.154 "name": "BaseBdev3", 00:21:50.154 "uuid": "88077824-111e-5967-9380-0da4f4cfa479", 00:21:50.154 "is_configured": true, 00:21:50.154 "data_offset": 2048, 00:21:50.154 "data_size": 63488 00:21:50.154 }, 00:21:50.154 { 00:21:50.154 "name": "BaseBdev4", 00:21:50.154 "uuid": "75f0ca29-886d-54d2-9df7-ddee34306284", 00:21:50.154 "is_configured": true, 00:21:50.154 "data_offset": 2048, 00:21:50.154 "data_size": 63488 00:21:50.154 } 00:21:50.154 ] 00:21:50.154 }' 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79839 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79839 ']' 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79839 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79839 00:21:50.154 killing process with pid 79839 00:21:50.154 Received shutdown signal, test time was about 19.189690 seconds 00:21:50.154 00:21:50.154 Latency(us) 00:21:50.154 [2024-12-06T13:15:56.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.154 [2024-12-06T13:15:56.683Z] =================================================================================================================== 00:21:50.154 [2024-12-06T13:15:56.683Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.154 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79839' 00:21:50.155 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79839 00:21:50.155 [2024-12-06 13:15:56.646194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:50.155 13:15:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79839 00:21:50.155 [2024-12-06 13:15:56.646386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.155 [2024-12-06 13:15:56.646504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.155 [2024-12-06 13:15:56.646556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:50.722 [2024-12-06 13:15:57.029568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:51.659 13:15:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:51.659 00:21:51.659 real 0m22.904s 00:21:51.659 user 0m31.177s 00:21:51.659 sys 0m2.459s 00:21:51.659 
************************************ 00:21:51.659 END TEST raid_rebuild_test_sb_io 00:21:51.659 ************************************ 00:21:51.659 13:15:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.659 13:15:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:51.936 13:15:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:51.936 13:15:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:51.936 13:15:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:51.936 13:15:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.936 13:15:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:51.936 ************************************ 00:21:51.936 START TEST raid5f_state_function_test 00:21:51.936 ************************************ 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:51.936 13:15:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:51.936 Process raid pid: 80573 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:51.936 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80573 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80573' 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80573 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80573 ']' 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:51.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.937 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.937 [2024-12-06 13:15:58.348146] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:21:51.937 [2024-12-06 13:15:58.348333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.195 [2024-12-06 13:15:58.530433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.196 [2024-12-06 13:15:58.657677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.454 [2024-12-06 13:15:58.863073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.454 [2024-12-06 13:15:58.863119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.020 [2024-12-06 13:15:59.384903] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.020 [2024-12-06 13:15:59.384982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.020 [2024-12-06 13:15:59.385000] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.020 [2024-12-06 13:15:59.385016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.020 [2024-12-06 13:15:59.385033] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:53.020 [2024-12-06 13:15:59.385048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.020 "name": "Existed_Raid", 00:21:53.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.020 "strip_size_kb": 64, 00:21:53.020 "state": "configuring", 00:21:53.020 "raid_level": "raid5f", 00:21:53.020 "superblock": false, 00:21:53.020 "num_base_bdevs": 3, 00:21:53.020 "num_base_bdevs_discovered": 0, 00:21:53.020 "num_base_bdevs_operational": 3, 00:21:53.020 "base_bdevs_list": [ 00:21:53.020 { 00:21:53.020 "name": "BaseBdev1", 00:21:53.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.020 "is_configured": false, 00:21:53.020 "data_offset": 0, 00:21:53.020 "data_size": 0 00:21:53.020 }, 00:21:53.020 { 00:21:53.020 "name": "BaseBdev2", 00:21:53.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.020 "is_configured": false, 00:21:53.020 "data_offset": 0, 00:21:53.020 "data_size": 0 00:21:53.020 }, 00:21:53.020 { 00:21:53.020 "name": "BaseBdev3", 00:21:53.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.020 "is_configured": false, 00:21:53.020 "data_offset": 0, 00:21:53.020 "data_size": 0 00:21:53.020 } 00:21:53.020 ] 00:21:53.020 }' 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.020 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.589 [2024-12-06 13:15:59.953002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:53.589 [2024-12-06 13:15:59.953046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.589 [2024-12-06 13:15:59.960984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.589 [2024-12-06 13:15:59.961039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.589 [2024-12-06 13:15:59.961055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.589 [2024-12-06 13:15:59.961071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.589 [2024-12-06 13:15:59.961081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:53.589 [2024-12-06 13:15:59.961095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.589 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.589 [2024-12-06 13:16:00.005837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.589 BaseBdev1 00:21:53.589 13:16:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.589 [ 00:21:53.589 { 00:21:53.589 "name": "BaseBdev1", 00:21:53.589 "aliases": [ 00:21:53.589 "23117279-92ff-4a4a-825f-296146957f67" 00:21:53.589 ], 00:21:53.589 "product_name": "Malloc disk", 00:21:53.589 "block_size": 512, 00:21:53.589 "num_blocks": 65536, 00:21:53.589 "uuid": "23117279-92ff-4a4a-825f-296146957f67", 00:21:53.589 "assigned_rate_limits": { 00:21:53.589 "rw_ios_per_sec": 0, 00:21:53.589 
"rw_mbytes_per_sec": 0, 00:21:53.589 "r_mbytes_per_sec": 0, 00:21:53.589 "w_mbytes_per_sec": 0 00:21:53.589 }, 00:21:53.589 "claimed": true, 00:21:53.589 "claim_type": "exclusive_write", 00:21:53.589 "zoned": false, 00:21:53.589 "supported_io_types": { 00:21:53.589 "read": true, 00:21:53.589 "write": true, 00:21:53.589 "unmap": true, 00:21:53.589 "flush": true, 00:21:53.589 "reset": true, 00:21:53.589 "nvme_admin": false, 00:21:53.589 "nvme_io": false, 00:21:53.589 "nvme_io_md": false, 00:21:53.589 "write_zeroes": true, 00:21:53.589 "zcopy": true, 00:21:53.589 "get_zone_info": false, 00:21:53.589 "zone_management": false, 00:21:53.589 "zone_append": false, 00:21:53.589 "compare": false, 00:21:53.589 "compare_and_write": false, 00:21:53.589 "abort": true, 00:21:53.589 "seek_hole": false, 00:21:53.589 "seek_data": false, 00:21:53.589 "copy": true, 00:21:53.589 "nvme_iov_md": false 00:21:53.589 }, 00:21:53.589 "memory_domains": [ 00:21:53.589 { 00:21:53.589 "dma_device_id": "system", 00:21:53.589 "dma_device_type": 1 00:21:53.589 }, 00:21:53.589 { 00:21:53.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.589 "dma_device_type": 2 00:21:53.589 } 00:21:53.589 ], 00:21:53.589 "driver_specific": {} 00:21:53.589 } 00:21:53.589 ] 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:53.589 13:16:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.589 "name": "Existed_Raid", 00:21:53.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.589 "strip_size_kb": 64, 00:21:53.589 "state": "configuring", 00:21:53.589 "raid_level": "raid5f", 00:21:53.589 "superblock": false, 00:21:53.589 "num_base_bdevs": 3, 00:21:53.589 "num_base_bdevs_discovered": 1, 00:21:53.589 "num_base_bdevs_operational": 3, 00:21:53.589 "base_bdevs_list": [ 00:21:53.589 { 00:21:53.589 "name": "BaseBdev1", 00:21:53.589 "uuid": "23117279-92ff-4a4a-825f-296146957f67", 00:21:53.589 "is_configured": true, 00:21:53.589 "data_offset": 0, 00:21:53.589 "data_size": 65536 00:21:53.589 }, 00:21:53.589 { 00:21:53.589 "name": 
"BaseBdev2", 00:21:53.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.589 "is_configured": false, 00:21:53.589 "data_offset": 0, 00:21:53.589 "data_size": 0 00:21:53.589 }, 00:21:53.589 { 00:21:53.589 "name": "BaseBdev3", 00:21:53.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.589 "is_configured": false, 00:21:53.589 "data_offset": 0, 00:21:53.589 "data_size": 0 00:21:53.589 } 00:21:53.589 ] 00:21:53.589 }' 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.589 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.157 [2024-12-06 13:16:00.554048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:54.157 [2024-12-06 13:16:00.554110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.157 [2024-12-06 13:16:00.562075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.157 [2024-12-06 13:16:00.564496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:21:54.157 [2024-12-06 13:16:00.564548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:54.157 [2024-12-06 13:16:00.564564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:54.157 [2024-12-06 13:16:00.564580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.157 "name": "Existed_Raid", 00:21:54.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.157 "strip_size_kb": 64, 00:21:54.157 "state": "configuring", 00:21:54.157 "raid_level": "raid5f", 00:21:54.157 "superblock": false, 00:21:54.157 "num_base_bdevs": 3, 00:21:54.157 "num_base_bdevs_discovered": 1, 00:21:54.157 "num_base_bdevs_operational": 3, 00:21:54.157 "base_bdevs_list": [ 00:21:54.157 { 00:21:54.157 "name": "BaseBdev1", 00:21:54.157 "uuid": "23117279-92ff-4a4a-825f-296146957f67", 00:21:54.157 "is_configured": true, 00:21:54.157 "data_offset": 0, 00:21:54.157 "data_size": 65536 00:21:54.157 }, 00:21:54.157 { 00:21:54.157 "name": "BaseBdev2", 00:21:54.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.157 "is_configured": false, 00:21:54.157 "data_offset": 0, 00:21:54.157 "data_size": 0 00:21:54.157 }, 00:21:54.157 { 00:21:54.157 "name": "BaseBdev3", 00:21:54.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.157 "is_configured": false, 00:21:54.157 "data_offset": 0, 00:21:54.157 "data_size": 0 00:21:54.157 } 00:21:54.157 ] 00:21:54.157 }' 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.157 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.727 [2024-12-06 13:16:01.100201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.727 BaseBdev2 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:54.727 [ 00:21:54.727 { 00:21:54.727 "name": "BaseBdev2", 00:21:54.727 "aliases": [ 00:21:54.727 "00928b83-22dc-47c0-ae2c-22af37f2328c" 00:21:54.727 ], 00:21:54.727 "product_name": "Malloc disk", 00:21:54.727 "block_size": 512, 00:21:54.727 "num_blocks": 65536, 00:21:54.727 "uuid": "00928b83-22dc-47c0-ae2c-22af37f2328c", 00:21:54.727 "assigned_rate_limits": { 00:21:54.727 "rw_ios_per_sec": 0, 00:21:54.727 "rw_mbytes_per_sec": 0, 00:21:54.727 "r_mbytes_per_sec": 0, 00:21:54.727 "w_mbytes_per_sec": 0 00:21:54.727 }, 00:21:54.727 "claimed": true, 00:21:54.727 "claim_type": "exclusive_write", 00:21:54.727 "zoned": false, 00:21:54.727 "supported_io_types": { 00:21:54.727 "read": true, 00:21:54.727 "write": true, 00:21:54.727 "unmap": true, 00:21:54.727 "flush": true, 00:21:54.727 "reset": true, 00:21:54.727 "nvme_admin": false, 00:21:54.727 "nvme_io": false, 00:21:54.727 "nvme_io_md": false, 00:21:54.727 "write_zeroes": true, 00:21:54.727 "zcopy": true, 00:21:54.727 "get_zone_info": false, 00:21:54.727 "zone_management": false, 00:21:54.727 "zone_append": false, 00:21:54.727 "compare": false, 00:21:54.727 "compare_and_write": false, 00:21:54.727 "abort": true, 00:21:54.727 "seek_hole": false, 00:21:54.727 "seek_data": false, 00:21:54.727 "copy": true, 00:21:54.727 "nvme_iov_md": false 00:21:54.727 }, 00:21:54.727 "memory_domains": [ 00:21:54.727 { 00:21:54.727 "dma_device_id": "system", 00:21:54.727 "dma_device_type": 1 00:21:54.727 }, 00:21:54.727 { 00:21:54.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.727 "dma_device_type": 2 00:21:54.727 } 00:21:54.727 ], 00:21:54.727 "driver_specific": {} 00:21:54.727 } 00:21:54.727 ] 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:21:54.727 "name": "Existed_Raid", 00:21:54.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.727 "strip_size_kb": 64, 00:21:54.727 "state": "configuring", 00:21:54.727 "raid_level": "raid5f", 00:21:54.727 "superblock": false, 00:21:54.727 "num_base_bdevs": 3, 00:21:54.727 "num_base_bdevs_discovered": 2, 00:21:54.727 "num_base_bdevs_operational": 3, 00:21:54.727 "base_bdevs_list": [ 00:21:54.727 { 00:21:54.727 "name": "BaseBdev1", 00:21:54.727 "uuid": "23117279-92ff-4a4a-825f-296146957f67", 00:21:54.727 "is_configured": true, 00:21:54.727 "data_offset": 0, 00:21:54.727 "data_size": 65536 00:21:54.727 }, 00:21:54.727 { 00:21:54.727 "name": "BaseBdev2", 00:21:54.727 "uuid": "00928b83-22dc-47c0-ae2c-22af37f2328c", 00:21:54.727 "is_configured": true, 00:21:54.727 "data_offset": 0, 00:21:54.727 "data_size": 65536 00:21:54.727 }, 00:21:54.727 { 00:21:54.727 "name": "BaseBdev3", 00:21:54.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.727 "is_configured": false, 00:21:54.727 "data_offset": 0, 00:21:54.727 "data_size": 0 00:21:54.727 } 00:21:54.727 ] 00:21:54.727 }' 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.727 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.298 [2024-12-06 13:16:01.715576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:55.298 [2024-12-06 13:16:01.715863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:55.298 [2024-12-06 13:16:01.715899] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:55.298 [2024-12-06 13:16:01.716249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:55.298 [2024-12-06 13:16:01.721527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:55.298 [2024-12-06 13:16:01.721553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:55.298 [2024-12-06 13:16:01.721893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.298 BaseBdev3 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.298 [ 00:21:55.298 { 00:21:55.298 "name": "BaseBdev3", 00:21:55.298 "aliases": [ 00:21:55.298 "48882793-8274-4f62-b9cc-e894302c10f3" 00:21:55.298 ], 00:21:55.298 "product_name": "Malloc disk", 00:21:55.298 "block_size": 512, 00:21:55.298 "num_blocks": 65536, 00:21:55.298 "uuid": "48882793-8274-4f62-b9cc-e894302c10f3", 00:21:55.298 "assigned_rate_limits": { 00:21:55.298 "rw_ios_per_sec": 0, 00:21:55.298 "rw_mbytes_per_sec": 0, 00:21:55.298 "r_mbytes_per_sec": 0, 00:21:55.298 "w_mbytes_per_sec": 0 00:21:55.298 }, 00:21:55.298 "claimed": true, 00:21:55.298 "claim_type": "exclusive_write", 00:21:55.298 "zoned": false, 00:21:55.298 "supported_io_types": { 00:21:55.298 "read": true, 00:21:55.298 "write": true, 00:21:55.298 "unmap": true, 00:21:55.298 "flush": true, 00:21:55.298 "reset": true, 00:21:55.298 "nvme_admin": false, 00:21:55.298 "nvme_io": false, 00:21:55.298 "nvme_io_md": false, 00:21:55.298 "write_zeroes": true, 00:21:55.298 "zcopy": true, 00:21:55.298 "get_zone_info": false, 00:21:55.298 "zone_management": false, 00:21:55.298 "zone_append": false, 00:21:55.298 "compare": false, 00:21:55.298 "compare_and_write": false, 00:21:55.298 "abort": true, 00:21:55.298 "seek_hole": false, 00:21:55.298 "seek_data": false, 00:21:55.298 "copy": true, 00:21:55.298 "nvme_iov_md": false 00:21:55.298 }, 00:21:55.298 "memory_domains": [ 00:21:55.298 { 00:21:55.298 "dma_device_id": "system", 00:21:55.298 "dma_device_type": 1 00:21:55.298 }, 00:21:55.298 { 00:21:55.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.298 "dma_device_type": 2 00:21:55.298 } 00:21:55.298 ], 00:21:55.298 "driver_specific": {} 00:21:55.298 } 00:21:55.298 ] 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.298 13:16:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.298 "name": "Existed_Raid", 00:21:55.298 "uuid": "80bad649-80ed-453e-8952-589790011d47", 00:21:55.298 "strip_size_kb": 64, 00:21:55.298 "state": "online", 00:21:55.298 "raid_level": "raid5f", 00:21:55.298 "superblock": false, 00:21:55.298 "num_base_bdevs": 3, 00:21:55.298 "num_base_bdevs_discovered": 3, 00:21:55.298 "num_base_bdevs_operational": 3, 00:21:55.298 "base_bdevs_list": [ 00:21:55.298 { 00:21:55.298 "name": "BaseBdev1", 00:21:55.298 "uuid": "23117279-92ff-4a4a-825f-296146957f67", 00:21:55.298 "is_configured": true, 00:21:55.298 "data_offset": 0, 00:21:55.298 "data_size": 65536 00:21:55.298 }, 00:21:55.298 { 00:21:55.298 "name": "BaseBdev2", 00:21:55.298 "uuid": "00928b83-22dc-47c0-ae2c-22af37f2328c", 00:21:55.298 "is_configured": true, 00:21:55.298 "data_offset": 0, 00:21:55.298 "data_size": 65536 00:21:55.298 }, 00:21:55.298 { 00:21:55.298 "name": "BaseBdev3", 00:21:55.298 "uuid": "48882793-8274-4f62-b9cc-e894302c10f3", 00:21:55.298 "is_configured": true, 00:21:55.298 "data_offset": 0, 00:21:55.298 "data_size": 65536 00:21:55.298 } 00:21:55.298 ] 00:21:55.298 }' 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.298 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:55.865 13:16:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:55.865 [2024-12-06 13:16:02.275907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.865 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:55.865 "name": "Existed_Raid", 00:21:55.865 "aliases": [ 00:21:55.865 "80bad649-80ed-453e-8952-589790011d47" 00:21:55.865 ], 00:21:55.865 "product_name": "Raid Volume", 00:21:55.865 "block_size": 512, 00:21:55.865 "num_blocks": 131072, 00:21:55.865 "uuid": "80bad649-80ed-453e-8952-589790011d47", 00:21:55.865 "assigned_rate_limits": { 00:21:55.865 "rw_ios_per_sec": 0, 00:21:55.865 "rw_mbytes_per_sec": 0, 00:21:55.865 "r_mbytes_per_sec": 0, 00:21:55.865 "w_mbytes_per_sec": 0 00:21:55.865 }, 00:21:55.865 "claimed": false, 00:21:55.865 "zoned": false, 00:21:55.865 "supported_io_types": { 00:21:55.865 "read": true, 00:21:55.865 "write": true, 00:21:55.865 "unmap": false, 00:21:55.865 "flush": false, 00:21:55.865 "reset": true, 00:21:55.865 "nvme_admin": false, 00:21:55.865 "nvme_io": false, 00:21:55.865 "nvme_io_md": false, 00:21:55.865 "write_zeroes": true, 00:21:55.865 "zcopy": false, 00:21:55.865 "get_zone_info": false, 00:21:55.865 "zone_management": false, 00:21:55.865 "zone_append": false, 
00:21:55.865 "compare": false, 00:21:55.865 "compare_and_write": false, 00:21:55.866 "abort": false, 00:21:55.866 "seek_hole": false, 00:21:55.866 "seek_data": false, 00:21:55.866 "copy": false, 00:21:55.866 "nvme_iov_md": false 00:21:55.866 }, 00:21:55.866 "driver_specific": { 00:21:55.866 "raid": { 00:21:55.866 "uuid": "80bad649-80ed-453e-8952-589790011d47", 00:21:55.866 "strip_size_kb": 64, 00:21:55.866 "state": "online", 00:21:55.866 "raid_level": "raid5f", 00:21:55.866 "superblock": false, 00:21:55.866 "num_base_bdevs": 3, 00:21:55.866 "num_base_bdevs_discovered": 3, 00:21:55.866 "num_base_bdevs_operational": 3, 00:21:55.866 "base_bdevs_list": [ 00:21:55.866 { 00:21:55.866 "name": "BaseBdev1", 00:21:55.866 "uuid": "23117279-92ff-4a4a-825f-296146957f67", 00:21:55.866 "is_configured": true, 00:21:55.866 "data_offset": 0, 00:21:55.866 "data_size": 65536 00:21:55.866 }, 00:21:55.866 { 00:21:55.866 "name": "BaseBdev2", 00:21:55.866 "uuid": "00928b83-22dc-47c0-ae2c-22af37f2328c", 00:21:55.866 "is_configured": true, 00:21:55.866 "data_offset": 0, 00:21:55.866 "data_size": 65536 00:21:55.866 }, 00:21:55.866 { 00:21:55.866 "name": "BaseBdev3", 00:21:55.866 "uuid": "48882793-8274-4f62-b9cc-e894302c10f3", 00:21:55.866 "is_configured": true, 00:21:55.866 "data_offset": 0, 00:21:55.866 "data_size": 65536 00:21:55.866 } 00:21:55.866 ] 00:21:55.866 } 00:21:55.866 } 00:21:55.866 }' 00:21:55.866 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:55.866 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:55.866 BaseBdev2 00:21:55.866 BaseBdev3' 00:21:55.866 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.124 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.125 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.125 [2024-12-06 13:16:02.595780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:56.383 
13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.383 "name": "Existed_Raid", 00:21:56.383 "uuid": "80bad649-80ed-453e-8952-589790011d47", 00:21:56.383 "strip_size_kb": 64, 00:21:56.383 "state": 
"online", 00:21:56.383 "raid_level": "raid5f", 00:21:56.383 "superblock": false, 00:21:56.383 "num_base_bdevs": 3, 00:21:56.383 "num_base_bdevs_discovered": 2, 00:21:56.383 "num_base_bdevs_operational": 2, 00:21:56.383 "base_bdevs_list": [ 00:21:56.383 { 00:21:56.383 "name": null, 00:21:56.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.383 "is_configured": false, 00:21:56.383 "data_offset": 0, 00:21:56.383 "data_size": 65536 00:21:56.383 }, 00:21:56.383 { 00:21:56.383 "name": "BaseBdev2", 00:21:56.383 "uuid": "00928b83-22dc-47c0-ae2c-22af37f2328c", 00:21:56.383 "is_configured": true, 00:21:56.383 "data_offset": 0, 00:21:56.383 "data_size": 65536 00:21:56.383 }, 00:21:56.383 { 00:21:56.383 "name": "BaseBdev3", 00:21:56.383 "uuid": "48882793-8274-4f62-b9cc-e894302c10f3", 00:21:56.383 "is_configured": true, 00:21:56.383 "data_offset": 0, 00:21:56.383 "data_size": 65536 00:21:56.383 } 00:21:56.383 ] 00:21:56.383 }' 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.383 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:56.966 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.967 [2024-12-06 13:16:03.236073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:56.967 [2024-12-06 13:16:03.236202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.967 [2024-12-06 13:16:03.321924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.967 [2024-12-06 13:16:03.402014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:56.967 [2024-12-06 13:16:03.402094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:56.967 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.226 BaseBdev2 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:57.226 [ 00:21:57.226 { 00:21:57.226 "name": "BaseBdev2", 00:21:57.226 "aliases": [ 00:21:57.226 "d18dee24-e4c7-4cc4-a8fc-ce09898845f2" 00:21:57.226 ], 00:21:57.226 "product_name": "Malloc disk", 00:21:57.226 "block_size": 512, 00:21:57.226 "num_blocks": 65536, 00:21:57.226 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:21:57.226 "assigned_rate_limits": { 00:21:57.226 "rw_ios_per_sec": 0, 00:21:57.226 "rw_mbytes_per_sec": 0, 00:21:57.226 "r_mbytes_per_sec": 0, 00:21:57.226 "w_mbytes_per_sec": 0 00:21:57.226 }, 00:21:57.226 "claimed": false, 00:21:57.226 "zoned": false, 00:21:57.226 "supported_io_types": { 00:21:57.226 "read": true, 00:21:57.226 "write": true, 00:21:57.226 "unmap": true, 00:21:57.226 "flush": true, 00:21:57.226 "reset": true, 00:21:57.226 "nvme_admin": false, 00:21:57.226 "nvme_io": false, 00:21:57.226 "nvme_io_md": false, 00:21:57.226 "write_zeroes": true, 00:21:57.226 "zcopy": true, 00:21:57.226 "get_zone_info": false, 00:21:57.226 "zone_management": false, 00:21:57.226 "zone_append": false, 00:21:57.226 "compare": false, 00:21:57.226 "compare_and_write": false, 00:21:57.226 "abort": true, 00:21:57.226 "seek_hole": false, 00:21:57.226 "seek_data": false, 00:21:57.226 "copy": true, 00:21:57.226 "nvme_iov_md": false 00:21:57.226 }, 00:21:57.226 "memory_domains": [ 00:21:57.226 { 00:21:57.226 "dma_device_id": "system", 00:21:57.226 "dma_device_type": 1 00:21:57.226 }, 00:21:57.226 { 00:21:57.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.226 "dma_device_type": 2 00:21:57.226 } 00:21:57.226 ], 00:21:57.226 "driver_specific": {} 00:21:57.226 } 00:21:57.226 ] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.226 BaseBdev3 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.226 [ 00:21:57.226 { 00:21:57.226 "name": "BaseBdev3", 00:21:57.226 "aliases": [ 00:21:57.226 "4ed941bc-8617-4c74-ba12-5a30ae116574" 00:21:57.226 ], 00:21:57.226 "product_name": "Malloc disk", 00:21:57.226 "block_size": 512, 00:21:57.226 "num_blocks": 65536, 00:21:57.226 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:21:57.226 "assigned_rate_limits": { 00:21:57.226 "rw_ios_per_sec": 0, 00:21:57.226 "rw_mbytes_per_sec": 0, 00:21:57.226 "r_mbytes_per_sec": 0, 00:21:57.226 "w_mbytes_per_sec": 0 00:21:57.226 }, 00:21:57.226 "claimed": false, 00:21:57.226 "zoned": false, 00:21:57.226 "supported_io_types": { 00:21:57.226 "read": true, 00:21:57.226 "write": true, 00:21:57.226 "unmap": true, 00:21:57.226 "flush": true, 00:21:57.226 "reset": true, 00:21:57.226 "nvme_admin": false, 00:21:57.226 "nvme_io": false, 00:21:57.226 "nvme_io_md": false, 00:21:57.226 "write_zeroes": true, 00:21:57.226 "zcopy": true, 00:21:57.226 "get_zone_info": false, 00:21:57.226 "zone_management": false, 00:21:57.226 "zone_append": false, 00:21:57.226 "compare": false, 00:21:57.226 "compare_and_write": false, 00:21:57.226 "abort": true, 00:21:57.226 "seek_hole": false, 00:21:57.226 "seek_data": false, 00:21:57.226 "copy": true, 00:21:57.226 "nvme_iov_md": false 00:21:57.226 }, 00:21:57.226 "memory_domains": [ 00:21:57.226 { 00:21:57.226 "dma_device_id": "system", 00:21:57.226 "dma_device_type": 1 00:21:57.226 }, 00:21:57.226 { 00:21:57.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.226 "dma_device_type": 2 00:21:57.226 } 00:21:57.226 ], 00:21:57.226 "driver_specific": {} 00:21:57.226 } 00:21:57.226 ] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:57.226 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:57.226 13:16:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 [2024-12-06 13:16:03.712382] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:57.227 [2024-12-06 13:16:03.712439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:57.227 [2024-12-06 13:16:03.712499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.227 [2024-12-06 13:16:03.714934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.227 13:16:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.486 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.486 "name": "Existed_Raid", 00:21:57.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.486 "strip_size_kb": 64, 00:21:57.486 "state": "configuring", 00:21:57.486 "raid_level": "raid5f", 00:21:57.486 "superblock": false, 00:21:57.486 "num_base_bdevs": 3, 00:21:57.486 "num_base_bdevs_discovered": 2, 00:21:57.486 "num_base_bdevs_operational": 3, 00:21:57.486 "base_bdevs_list": [ 00:21:57.486 { 00:21:57.486 "name": "BaseBdev1", 00:21:57.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.486 "is_configured": false, 00:21:57.486 "data_offset": 0, 00:21:57.486 "data_size": 0 00:21:57.486 }, 00:21:57.486 { 00:21:57.486 "name": "BaseBdev2", 00:21:57.486 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:21:57.486 "is_configured": true, 00:21:57.486 "data_offset": 0, 00:21:57.486 "data_size": 65536 00:21:57.486 }, 00:21:57.486 { 00:21:57.486 "name": "BaseBdev3", 00:21:57.486 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:21:57.486 "is_configured": true, 
00:21:57.486 "data_offset": 0, 00:21:57.486 "data_size": 65536 00:21:57.486 } 00:21:57.486 ] 00:21:57.486 }' 00:21:57.486 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.486 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.744 [2024-12-06 13:16:04.236552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.744 13:16:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.744 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.002 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.002 "name": "Existed_Raid", 00:21:58.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.002 "strip_size_kb": 64, 00:21:58.002 "state": "configuring", 00:21:58.002 "raid_level": "raid5f", 00:21:58.002 "superblock": false, 00:21:58.002 "num_base_bdevs": 3, 00:21:58.002 "num_base_bdevs_discovered": 1, 00:21:58.002 "num_base_bdevs_operational": 3, 00:21:58.002 "base_bdevs_list": [ 00:21:58.002 { 00:21:58.002 "name": "BaseBdev1", 00:21:58.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.002 "is_configured": false, 00:21:58.002 "data_offset": 0, 00:21:58.002 "data_size": 0 00:21:58.002 }, 00:21:58.002 { 00:21:58.002 "name": null, 00:21:58.002 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:21:58.002 "is_configured": false, 00:21:58.002 "data_offset": 0, 00:21:58.002 "data_size": 65536 00:21:58.002 }, 00:21:58.002 { 00:21:58.002 "name": "BaseBdev3", 00:21:58.002 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:21:58.002 "is_configured": true, 00:21:58.003 "data_offset": 0, 00:21:58.003 "data_size": 65536 00:21:58.003 } 00:21:58.003 ] 00:21:58.003 }' 00:21:58.003 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.003 13:16:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.261 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.520 [2024-12-06 13:16:04.790272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:58.520 BaseBdev1 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:58.520 13:16:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.520 [ 00:21:58.520 { 00:21:58.520 "name": "BaseBdev1", 00:21:58.520 "aliases": [ 00:21:58.520 "fd3e4d35-6062-4826-93cb-0df4f963f1d8" 00:21:58.520 ], 00:21:58.520 "product_name": "Malloc disk", 00:21:58.520 "block_size": 512, 00:21:58.520 "num_blocks": 65536, 00:21:58.520 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:21:58.520 "assigned_rate_limits": { 00:21:58.520 "rw_ios_per_sec": 0, 00:21:58.520 "rw_mbytes_per_sec": 0, 00:21:58.520 "r_mbytes_per_sec": 0, 00:21:58.520 "w_mbytes_per_sec": 0 00:21:58.520 }, 00:21:58.520 "claimed": true, 00:21:58.520 "claim_type": "exclusive_write", 00:21:58.520 "zoned": false, 00:21:58.520 "supported_io_types": { 00:21:58.520 "read": true, 00:21:58.520 "write": true, 00:21:58.520 "unmap": true, 00:21:58.520 "flush": true, 00:21:58.520 "reset": true, 00:21:58.520 "nvme_admin": false, 00:21:58.520 "nvme_io": false, 00:21:58.520 "nvme_io_md": false, 00:21:58.520 "write_zeroes": true, 00:21:58.520 "zcopy": true, 00:21:58.520 "get_zone_info": false, 00:21:58.520 "zone_management": false, 00:21:58.520 "zone_append": false, 00:21:58.520 
"compare": false, 00:21:58.520 "compare_and_write": false, 00:21:58.520 "abort": true, 00:21:58.520 "seek_hole": false, 00:21:58.520 "seek_data": false, 00:21:58.520 "copy": true, 00:21:58.520 "nvme_iov_md": false 00:21:58.520 }, 00:21:58.520 "memory_domains": [ 00:21:58.520 { 00:21:58.520 "dma_device_id": "system", 00:21:58.520 "dma_device_type": 1 00:21:58.520 }, 00:21:58.520 { 00:21:58.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.520 "dma_device_type": 2 00:21:58.520 } 00:21:58.520 ], 00:21:58.520 "driver_specific": {} 00:21:58.520 } 00:21:58.520 ] 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.520 13:16:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.520 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.520 "name": "Existed_Raid", 00:21:58.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.520 "strip_size_kb": 64, 00:21:58.520 "state": "configuring", 00:21:58.520 "raid_level": "raid5f", 00:21:58.520 "superblock": false, 00:21:58.520 "num_base_bdevs": 3, 00:21:58.520 "num_base_bdevs_discovered": 2, 00:21:58.520 "num_base_bdevs_operational": 3, 00:21:58.520 "base_bdevs_list": [ 00:21:58.520 { 00:21:58.520 "name": "BaseBdev1", 00:21:58.520 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:21:58.520 "is_configured": true, 00:21:58.521 "data_offset": 0, 00:21:58.521 "data_size": 65536 00:21:58.521 }, 00:21:58.521 { 00:21:58.521 "name": null, 00:21:58.521 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:21:58.521 "is_configured": false, 00:21:58.521 "data_offset": 0, 00:21:58.521 "data_size": 65536 00:21:58.521 }, 00:21:58.521 { 00:21:58.521 "name": "BaseBdev3", 00:21:58.521 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:21:58.521 "is_configured": true, 00:21:58.521 "data_offset": 0, 00:21:58.521 "data_size": 65536 00:21:58.521 } 00:21:58.521 ] 00:21:58.521 }' 00:21:58.521 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.521 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.084 13:16:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.084 [2024-12-06 13:16:05.418525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:59.084 13:16:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.084 "name": "Existed_Raid", 00:21:59.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.084 "strip_size_kb": 64, 00:21:59.084 "state": "configuring", 00:21:59.084 "raid_level": "raid5f", 00:21:59.084 "superblock": false, 00:21:59.084 "num_base_bdevs": 3, 00:21:59.084 "num_base_bdevs_discovered": 1, 00:21:59.084 "num_base_bdevs_operational": 3, 00:21:59.084 "base_bdevs_list": [ 00:21:59.084 { 00:21:59.084 "name": "BaseBdev1", 00:21:59.084 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:21:59.084 "is_configured": true, 00:21:59.084 "data_offset": 0, 00:21:59.084 "data_size": 65536 00:21:59.084 }, 00:21:59.084 { 00:21:59.084 "name": null, 00:21:59.084 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:21:59.084 "is_configured": false, 00:21:59.084 "data_offset": 0, 00:21:59.084 "data_size": 65536 00:21:59.084 }, 00:21:59.084 { 00:21:59.084 "name": null, 
00:21:59.084 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:21:59.084 "is_configured": false, 00:21:59.084 "data_offset": 0, 00:21:59.084 "data_size": 65536 00:21:59.084 } 00:21:59.084 ] 00:21:59.084 }' 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.084 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.649 [2024-12-06 13:16:05.970690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.649 13:16:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.649 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.649 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.649 "name": "Existed_Raid", 00:21:59.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.649 "strip_size_kb": 64, 00:21:59.649 "state": "configuring", 00:21:59.649 "raid_level": "raid5f", 00:21:59.649 "superblock": false, 00:21:59.649 "num_base_bdevs": 3, 00:21:59.649 "num_base_bdevs_discovered": 2, 00:21:59.649 "num_base_bdevs_operational": 3, 00:21:59.649 "base_bdevs_list": [ 00:21:59.649 { 
00:21:59.649 "name": "BaseBdev1", 00:21:59.649 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:21:59.649 "is_configured": true, 00:21:59.649 "data_offset": 0, 00:21:59.649 "data_size": 65536 00:21:59.649 }, 00:21:59.649 { 00:21:59.649 "name": null, 00:21:59.649 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:21:59.649 "is_configured": false, 00:21:59.649 "data_offset": 0, 00:21:59.649 "data_size": 65536 00:21:59.649 }, 00:21:59.649 { 00:21:59.649 "name": "BaseBdev3", 00:21:59.649 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:21:59.649 "is_configured": true, 00:21:59.649 "data_offset": 0, 00:21:59.649 "data_size": 65536 00:21:59.649 } 00:21:59.649 ] 00:21:59.649 }' 00:21:59.649 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.649 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.214 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.215 [2024-12-06 13:16:06.538839] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.215 "name": "Existed_Raid", 00:22:00.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.215 "strip_size_kb": 64, 00:22:00.215 "state": "configuring", 00:22:00.215 "raid_level": "raid5f", 00:22:00.215 "superblock": false, 00:22:00.215 "num_base_bdevs": 3, 00:22:00.215 "num_base_bdevs_discovered": 1, 00:22:00.215 "num_base_bdevs_operational": 3, 00:22:00.215 "base_bdevs_list": [ 00:22:00.215 { 00:22:00.215 "name": null, 00:22:00.215 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:22:00.215 "is_configured": false, 00:22:00.215 "data_offset": 0, 00:22:00.215 "data_size": 65536 00:22:00.215 }, 00:22:00.215 { 00:22:00.215 "name": null, 00:22:00.215 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:22:00.215 "is_configured": false, 00:22:00.215 "data_offset": 0, 00:22:00.215 "data_size": 65536 00:22:00.215 }, 00:22:00.215 { 00:22:00.215 "name": "BaseBdev3", 00:22:00.215 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:22:00.215 "is_configured": true, 00:22:00.215 "data_offset": 0, 00:22:00.215 "data_size": 65536 00:22:00.215 } 00:22:00.215 ] 00:22:00.215 }' 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.215 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.863 [2024-12-06 13:16:07.218647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.863 13:16:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.863 "name": "Existed_Raid", 00:22:00.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.863 "strip_size_kb": 64, 00:22:00.863 "state": "configuring", 00:22:00.863 "raid_level": "raid5f", 00:22:00.863 "superblock": false, 00:22:00.863 "num_base_bdevs": 3, 00:22:00.863 "num_base_bdevs_discovered": 2, 00:22:00.863 "num_base_bdevs_operational": 3, 00:22:00.863 "base_bdevs_list": [ 00:22:00.863 { 00:22:00.863 "name": null, 00:22:00.863 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:22:00.863 "is_configured": false, 00:22:00.863 "data_offset": 0, 00:22:00.863 "data_size": 65536 00:22:00.863 }, 00:22:00.863 { 00:22:00.863 "name": "BaseBdev2", 00:22:00.863 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:22:00.863 "is_configured": true, 00:22:00.863 "data_offset": 0, 00:22:00.863 "data_size": 65536 00:22:00.863 }, 00:22:00.863 { 00:22:00.863 "name": "BaseBdev3", 00:22:00.863 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:22:00.863 "is_configured": true, 00:22:00.863 "data_offset": 0, 00:22:00.863 "data_size": 65536 00:22:00.863 } 00:22:00.863 ] 00:22:00.863 }' 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.863 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:01.433 
13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fd3e4d35-6062-4826-93cb-0df4f963f1d8 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.433 [2024-12-06 13:16:07.840539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:01.433 [2024-12-06 13:16:07.840603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:01.433 [2024-12-06 13:16:07.840619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:01.433 [2024-12-06 13:16:07.840927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:22:01.433 [2024-12-06 13:16:07.845774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:01.433 [2024-12-06 13:16:07.845803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:01.433 [2024-12-06 13:16:07.846119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.433 NewBaseBdev 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.433 13:16:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.433 [ 00:22:01.433 { 00:22:01.433 "name": "NewBaseBdev", 00:22:01.433 "aliases": [ 00:22:01.433 "fd3e4d35-6062-4826-93cb-0df4f963f1d8" 00:22:01.433 ], 00:22:01.433 "product_name": "Malloc disk", 00:22:01.433 "block_size": 512, 00:22:01.433 "num_blocks": 65536, 00:22:01.433 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:22:01.433 "assigned_rate_limits": { 00:22:01.433 "rw_ios_per_sec": 0, 00:22:01.433 "rw_mbytes_per_sec": 0, 00:22:01.433 "r_mbytes_per_sec": 0, 00:22:01.433 "w_mbytes_per_sec": 0 00:22:01.433 }, 00:22:01.433 "claimed": true, 00:22:01.433 "claim_type": "exclusive_write", 00:22:01.433 "zoned": false, 00:22:01.433 "supported_io_types": { 00:22:01.433 "read": true, 00:22:01.433 "write": true, 00:22:01.433 "unmap": true, 00:22:01.433 "flush": true, 00:22:01.433 "reset": true, 00:22:01.433 "nvme_admin": false, 00:22:01.433 "nvme_io": false, 00:22:01.433 "nvme_io_md": false, 00:22:01.433 "write_zeroes": true, 00:22:01.433 "zcopy": true, 00:22:01.433 "get_zone_info": false, 00:22:01.433 "zone_management": false, 00:22:01.433 "zone_append": false, 00:22:01.433 "compare": false, 00:22:01.433 "compare_and_write": false, 00:22:01.433 "abort": true, 00:22:01.433 "seek_hole": false, 00:22:01.433 "seek_data": false, 00:22:01.433 "copy": true, 00:22:01.433 "nvme_iov_md": false 00:22:01.433 }, 00:22:01.433 "memory_domains": [ 00:22:01.433 { 00:22:01.433 "dma_device_id": "system", 00:22:01.433 "dma_device_type": 1 00:22:01.433 }, 00:22:01.433 { 00:22:01.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.433 "dma_device_type": 2 00:22:01.433 } 00:22:01.433 ], 00:22:01.433 "driver_specific": {} 00:22:01.433 } 00:22:01.433 ] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:01.433 13:16:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.433 "name": "Existed_Raid", 00:22:01.433 "uuid": "a474b381-4b49-4108-b6dc-6cd9e43cbba3", 00:22:01.433 "strip_size_kb": 64, 00:22:01.433 "state": "online", 
00:22:01.433 "raid_level": "raid5f", 00:22:01.433 "superblock": false, 00:22:01.433 "num_base_bdevs": 3, 00:22:01.433 "num_base_bdevs_discovered": 3, 00:22:01.433 "num_base_bdevs_operational": 3, 00:22:01.433 "base_bdevs_list": [ 00:22:01.433 { 00:22:01.433 "name": "NewBaseBdev", 00:22:01.433 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:22:01.433 "is_configured": true, 00:22:01.433 "data_offset": 0, 00:22:01.433 "data_size": 65536 00:22:01.433 }, 00:22:01.433 { 00:22:01.433 "name": "BaseBdev2", 00:22:01.433 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:22:01.433 "is_configured": true, 00:22:01.433 "data_offset": 0, 00:22:01.433 "data_size": 65536 00:22:01.433 }, 00:22:01.433 { 00:22:01.433 "name": "BaseBdev3", 00:22:01.433 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:22:01.433 "is_configured": true, 00:22:01.433 "data_offset": 0, 00:22:01.433 "data_size": 65536 00:22:01.433 } 00:22:01.433 ] 00:22:01.433 }' 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.433 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.002 [2024-12-06 13:16:08.412055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:02.002 "name": "Existed_Raid", 00:22:02.002 "aliases": [ 00:22:02.002 "a474b381-4b49-4108-b6dc-6cd9e43cbba3" 00:22:02.002 ], 00:22:02.002 "product_name": "Raid Volume", 00:22:02.002 "block_size": 512, 00:22:02.002 "num_blocks": 131072, 00:22:02.002 "uuid": "a474b381-4b49-4108-b6dc-6cd9e43cbba3", 00:22:02.002 "assigned_rate_limits": { 00:22:02.002 "rw_ios_per_sec": 0, 00:22:02.002 "rw_mbytes_per_sec": 0, 00:22:02.002 "r_mbytes_per_sec": 0, 00:22:02.002 "w_mbytes_per_sec": 0 00:22:02.002 }, 00:22:02.002 "claimed": false, 00:22:02.002 "zoned": false, 00:22:02.002 "supported_io_types": { 00:22:02.002 "read": true, 00:22:02.002 "write": true, 00:22:02.002 "unmap": false, 00:22:02.002 "flush": false, 00:22:02.002 "reset": true, 00:22:02.002 "nvme_admin": false, 00:22:02.002 "nvme_io": false, 00:22:02.002 "nvme_io_md": false, 00:22:02.002 "write_zeroes": true, 00:22:02.002 "zcopy": false, 00:22:02.002 "get_zone_info": false, 00:22:02.002 "zone_management": false, 00:22:02.002 "zone_append": false, 00:22:02.002 "compare": false, 00:22:02.002 "compare_and_write": false, 00:22:02.002 "abort": false, 00:22:02.002 "seek_hole": false, 00:22:02.002 "seek_data": false, 00:22:02.002 "copy": false, 00:22:02.002 "nvme_iov_md": false 00:22:02.002 }, 00:22:02.002 "driver_specific": { 00:22:02.002 "raid": { 00:22:02.002 "uuid": "a474b381-4b49-4108-b6dc-6cd9e43cbba3", 
00:22:02.002 "strip_size_kb": 64, 00:22:02.002 "state": "online", 00:22:02.002 "raid_level": "raid5f", 00:22:02.002 "superblock": false, 00:22:02.002 "num_base_bdevs": 3, 00:22:02.002 "num_base_bdevs_discovered": 3, 00:22:02.002 "num_base_bdevs_operational": 3, 00:22:02.002 "base_bdevs_list": [ 00:22:02.002 { 00:22:02.002 "name": "NewBaseBdev", 00:22:02.002 "uuid": "fd3e4d35-6062-4826-93cb-0df4f963f1d8", 00:22:02.002 "is_configured": true, 00:22:02.002 "data_offset": 0, 00:22:02.002 "data_size": 65536 00:22:02.002 }, 00:22:02.002 { 00:22:02.002 "name": "BaseBdev2", 00:22:02.002 "uuid": "d18dee24-e4c7-4cc4-a8fc-ce09898845f2", 00:22:02.002 "is_configured": true, 00:22:02.002 "data_offset": 0, 00:22:02.002 "data_size": 65536 00:22:02.002 }, 00:22:02.002 { 00:22:02.002 "name": "BaseBdev3", 00:22:02.002 "uuid": "4ed941bc-8617-4c74-ba12-5a30ae116574", 00:22:02.002 "is_configured": true, 00:22:02.002 "data_offset": 0, 00:22:02.002 "data_size": 65536 00:22:02.002 } 00:22:02.002 ] 00:22:02.002 } 00:22:02.002 } 00:22:02.002 }' 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:02.002 BaseBdev2 00:22:02.002 BaseBdev3' 00:22:02.002 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 
-- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.261 [2024-12-06 13:16:08.727878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:02.261 [2024-12-06 13:16:08.727914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.261 [2024-12-06 13:16:08.728022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.261 [2024-12-06 13:16:08.728385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.261 [2024-12-06 13:16:08.728408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80573 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80573 ']' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80573 00:22:02.261 
13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80573 00:22:02.261 killing process with pid 80573 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80573' 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80573 00:22:02.261 [2024-12-06 13:16:08.775310] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:02.261 13:16:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80573 00:22:02.828 [2024-12-06 13:16:09.051334] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:03.765 00:22:03.765 real 0m11.882s 00:22:03.765 user 0m19.670s 00:22:03.765 sys 0m1.672s 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.765 ************************************ 00:22:03.765 END TEST raid5f_state_function_test 00:22:03.765 ************************************ 00:22:03.765 13:16:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:22:03.765 13:16:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:03.765 13:16:10 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.765 13:16:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:03.765 ************************************ 00:22:03.765 START TEST raid5f_state_function_test_sb 00:22:03.765 ************************************ 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.765 13:16:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:03.765 Process raid pid: 81207 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81207 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81207' 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81207 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:03.765 13:16:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81207 ']' 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.765 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.024 [2024-12-06 13:16:10.293151] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:04.024 [2024-12-06 13:16:10.294128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.024 [2024-12-06 13:16:10.489263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.282 [2024-12-06 13:16:10.619476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.540 [2024-12-06 13:16:10.826572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:04.540 [2024-12-06 13:16:10.826773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:05.105 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.105 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:05.105 13:16:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:05.105 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.105 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.105 [2024-12-06 13:16:11.358919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:05.105 [2024-12-06 13:16:11.359133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:05.105 [2024-12-06 13:16:11.359254] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:05.105 [2024-12-06 13:16:11.359387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:05.105 [2024-12-06 13:16:11.359519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:05.105 [2024-12-06 13:16:11.359580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:05.105 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.105 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.106 "name": "Existed_Raid", 00:22:05.106 "uuid": "fc9d8476-700b-4145-9544-7d59af73d5c1", 00:22:05.106 "strip_size_kb": 64, 00:22:05.106 "state": "configuring", 00:22:05.106 "raid_level": "raid5f", 00:22:05.106 "superblock": true, 00:22:05.106 "num_base_bdevs": 3, 00:22:05.106 "num_base_bdevs_discovered": 0, 00:22:05.106 "num_base_bdevs_operational": 3, 00:22:05.106 "base_bdevs_list": [ 00:22:05.106 { 00:22:05.106 "name": "BaseBdev1", 00:22:05.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.106 "is_configured": false, 00:22:05.106 "data_offset": 0, 00:22:05.106 "data_size": 0 00:22:05.106 }, 00:22:05.106 { 00:22:05.106 "name": "BaseBdev2", 00:22:05.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.106 "is_configured": false, 00:22:05.106 
"data_offset": 0, 00:22:05.106 "data_size": 0 00:22:05.106 }, 00:22:05.106 { 00:22:05.106 "name": "BaseBdev3", 00:22:05.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.106 "is_configured": false, 00:22:05.106 "data_offset": 0, 00:22:05.106 "data_size": 0 00:22:05.106 } 00:22:05.106 ] 00:22:05.106 }' 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.106 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 [2024-12-06 13:16:11.915004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:05.671 [2024-12-06 13:16:11.915048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 [2024-12-06 13:16:11.926990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:05.671 [2024-12-06 13:16:11.927046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:05.671 [2024-12-06 13:16:11.927061] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:05.671 [2024-12-06 13:16:11.927078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:05.671 [2024-12-06 13:16:11.927088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:05.671 [2024-12-06 13:16:11.927102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.671 [2024-12-06 13:16:11.976562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:05.671 BaseBdev1 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:05.671 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.672 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.672 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.672 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:05.672 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.672 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.672 [ 00:22:05.672 { 00:22:05.672 "name": "BaseBdev1", 00:22:05.672 "aliases": [ 00:22:05.672 "48424e96-433d-43ab-9c4d-52a519197039" 00:22:05.672 ], 00:22:05.672 "product_name": "Malloc disk", 00:22:05.672 "block_size": 512, 00:22:05.672 "num_blocks": 65536, 00:22:05.672 "uuid": "48424e96-433d-43ab-9c4d-52a519197039", 00:22:05.672 "assigned_rate_limits": { 00:22:05.672 "rw_ios_per_sec": 0, 00:22:05.672 "rw_mbytes_per_sec": 0, 00:22:05.672 "r_mbytes_per_sec": 0, 00:22:05.672 "w_mbytes_per_sec": 0 00:22:05.672 }, 00:22:05.672 "claimed": true, 00:22:05.672 "claim_type": "exclusive_write", 00:22:05.672 "zoned": false, 00:22:05.672 "supported_io_types": { 00:22:05.672 "read": true, 00:22:05.672 "write": true, 00:22:05.672 "unmap": true, 00:22:05.672 "flush": true, 00:22:05.672 "reset": true, 00:22:05.672 "nvme_admin": false, 00:22:05.672 "nvme_io": false, 00:22:05.672 "nvme_io_md": false, 00:22:05.672 "write_zeroes": true, 00:22:05.672 "zcopy": true, 00:22:05.672 "get_zone_info": false, 00:22:05.672 "zone_management": false, 00:22:05.672 "zone_append": false, 00:22:05.672 "compare": false, 00:22:05.672 "compare_and_write": false, 00:22:05.672 "abort": true, 00:22:05.672 "seek_hole": false, 00:22:05.672 
"seek_data": false, 00:22:05.672 "copy": true, 00:22:05.672 "nvme_iov_md": false 00:22:05.672 }, 00:22:05.672 "memory_domains": [ 00:22:05.672 { 00:22:05.672 "dma_device_id": "system", 00:22:05.672 "dma_device_type": 1 00:22:05.672 }, 00:22:05.672 { 00:22:05.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.672 "dma_device_type": 2 00:22:05.672 } 00:22:05.672 ], 00:22:05.672 "driver_specific": {} 00:22:05.672 } 00:22:05.672 ] 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.672 "name": "Existed_Raid", 00:22:05.672 "uuid": "20899375-2bc1-48a2-8a34-c008355c5a16", 00:22:05.672 "strip_size_kb": 64, 00:22:05.672 "state": "configuring", 00:22:05.672 "raid_level": "raid5f", 00:22:05.672 "superblock": true, 00:22:05.672 "num_base_bdevs": 3, 00:22:05.672 "num_base_bdevs_discovered": 1, 00:22:05.672 "num_base_bdevs_operational": 3, 00:22:05.672 "base_bdevs_list": [ 00:22:05.672 { 00:22:05.672 "name": "BaseBdev1", 00:22:05.672 "uuid": "48424e96-433d-43ab-9c4d-52a519197039", 00:22:05.672 "is_configured": true, 00:22:05.672 "data_offset": 2048, 00:22:05.672 "data_size": 63488 00:22:05.672 }, 00:22:05.672 { 00:22:05.672 "name": "BaseBdev2", 00:22:05.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.672 "is_configured": false, 00:22:05.672 "data_offset": 0, 00:22:05.672 "data_size": 0 00:22:05.672 }, 00:22:05.672 { 00:22:05.672 "name": "BaseBdev3", 00:22:05.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.672 "is_configured": false, 00:22:05.672 "data_offset": 0, 00:22:05.672 "data_size": 0 00:22:05.672 } 00:22:05.672 ] 00:22:05.672 }' 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.672 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.239 [2024-12-06 13:16:12.512754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:06.239 [2024-12-06 13:16:12.512817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.239 [2024-12-06 13:16:12.520819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.239 [2024-12-06 13:16:12.523338] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:06.239 [2024-12-06 13:16:12.523396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:06.239 [2024-12-06 13:16:12.523413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:06.239 [2024-12-06 13:16:12.523429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.239 "name": 
"Existed_Raid", 00:22:06.239 "uuid": "f522d07e-1b6d-4c03-8ac2-df6d3649e4f4", 00:22:06.239 "strip_size_kb": 64, 00:22:06.239 "state": "configuring", 00:22:06.239 "raid_level": "raid5f", 00:22:06.239 "superblock": true, 00:22:06.239 "num_base_bdevs": 3, 00:22:06.239 "num_base_bdevs_discovered": 1, 00:22:06.239 "num_base_bdevs_operational": 3, 00:22:06.239 "base_bdevs_list": [ 00:22:06.239 { 00:22:06.239 "name": "BaseBdev1", 00:22:06.239 "uuid": "48424e96-433d-43ab-9c4d-52a519197039", 00:22:06.239 "is_configured": true, 00:22:06.239 "data_offset": 2048, 00:22:06.239 "data_size": 63488 00:22:06.239 }, 00:22:06.239 { 00:22:06.239 "name": "BaseBdev2", 00:22:06.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.239 "is_configured": false, 00:22:06.239 "data_offset": 0, 00:22:06.239 "data_size": 0 00:22:06.239 }, 00:22:06.239 { 00:22:06.239 "name": "BaseBdev3", 00:22:06.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.239 "is_configured": false, 00:22:06.239 "data_offset": 0, 00:22:06.239 "data_size": 0 00:22:06.239 } 00:22:06.239 ] 00:22:06.239 }' 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.239 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.806 [2024-12-06 13:16:13.103798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:06.806 BaseBdev2 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.806 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.806 [ 00:22:06.806 { 00:22:06.806 "name": "BaseBdev2", 00:22:06.806 "aliases": [ 00:22:06.806 "09508207-83c7-458b-93ad-bf07ad13aed7" 00:22:06.806 ], 00:22:06.806 "product_name": "Malloc disk", 00:22:06.806 "block_size": 512, 00:22:06.806 "num_blocks": 65536, 00:22:06.806 "uuid": "09508207-83c7-458b-93ad-bf07ad13aed7", 00:22:06.806 "assigned_rate_limits": { 00:22:06.806 "rw_ios_per_sec": 0, 00:22:06.806 "rw_mbytes_per_sec": 0, 00:22:06.806 "r_mbytes_per_sec": 0, 00:22:06.806 "w_mbytes_per_sec": 0 00:22:06.806 }, 00:22:06.806 "claimed": true, 
00:22:06.806 "claim_type": "exclusive_write", 00:22:06.806 "zoned": false, 00:22:06.806 "supported_io_types": { 00:22:06.806 "read": true, 00:22:06.806 "write": true, 00:22:06.806 "unmap": true, 00:22:06.806 "flush": true, 00:22:06.806 "reset": true, 00:22:06.806 "nvme_admin": false, 00:22:06.807 "nvme_io": false, 00:22:06.807 "nvme_io_md": false, 00:22:06.807 "write_zeroes": true, 00:22:06.807 "zcopy": true, 00:22:06.807 "get_zone_info": false, 00:22:06.807 "zone_management": false, 00:22:06.807 "zone_append": false, 00:22:06.807 "compare": false, 00:22:06.807 "compare_and_write": false, 00:22:06.807 "abort": true, 00:22:06.807 "seek_hole": false, 00:22:06.807 "seek_data": false, 00:22:06.807 "copy": true, 00:22:06.807 "nvme_iov_md": false 00:22:06.807 }, 00:22:06.807 "memory_domains": [ 00:22:06.807 { 00:22:06.807 "dma_device_id": "system", 00:22:06.807 "dma_device_type": 1 00:22:06.807 }, 00:22:06.807 { 00:22:06.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.807 "dma_device_type": 2 00:22:06.807 } 00:22:06.807 ], 00:22:06.807 "driver_specific": {} 00:22:06.807 } 00:22:06.807 ] 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.807 13:16:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.807 "name": "Existed_Raid", 00:22:06.807 "uuid": "f522d07e-1b6d-4c03-8ac2-df6d3649e4f4", 00:22:06.807 "strip_size_kb": 64, 00:22:06.807 "state": "configuring", 00:22:06.807 "raid_level": "raid5f", 00:22:06.807 "superblock": true, 00:22:06.807 "num_base_bdevs": 3, 00:22:06.807 "num_base_bdevs_discovered": 2, 00:22:06.807 "num_base_bdevs_operational": 3, 00:22:06.807 "base_bdevs_list": [ 00:22:06.807 { 00:22:06.807 "name": "BaseBdev1", 00:22:06.807 "uuid": "48424e96-433d-43ab-9c4d-52a519197039", 
00:22:06.807 "is_configured": true, 00:22:06.807 "data_offset": 2048, 00:22:06.807 "data_size": 63488 00:22:06.807 }, 00:22:06.807 { 00:22:06.807 "name": "BaseBdev2", 00:22:06.807 "uuid": "09508207-83c7-458b-93ad-bf07ad13aed7", 00:22:06.807 "is_configured": true, 00:22:06.807 "data_offset": 2048, 00:22:06.807 "data_size": 63488 00:22:06.807 }, 00:22:06.807 { 00:22:06.807 "name": "BaseBdev3", 00:22:06.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.807 "is_configured": false, 00:22:06.807 "data_offset": 0, 00:22:06.807 "data_size": 0 00:22:06.807 } 00:22:06.807 ] 00:22:06.807 }' 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.807 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.374 [2024-12-06 13:16:13.692178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:07.374 [2024-12-06 13:16:13.692532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:07.374 [2024-12-06 13:16:13.692562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:07.374 [2024-12-06 13:16:13.692891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:07.374 BaseBdev3 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.374 [2024-12-06 13:16:13.698245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:07.374 [2024-12-06 13:16:13.698411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:07.374 [2024-12-06 13:16:13.698771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.374 [ 00:22:07.374 { 00:22:07.374 "name": "BaseBdev3", 00:22:07.374 "aliases": [ 00:22:07.374 "583a63f6-7631-41ac-81f6-8cd452f04d99" 00:22:07.374 ], 00:22:07.374 "product_name": "Malloc disk", 00:22:07.374 "block_size": 512, 00:22:07.374 
"num_blocks": 65536, 00:22:07.374 "uuid": "583a63f6-7631-41ac-81f6-8cd452f04d99", 00:22:07.374 "assigned_rate_limits": { 00:22:07.374 "rw_ios_per_sec": 0, 00:22:07.374 "rw_mbytes_per_sec": 0, 00:22:07.374 "r_mbytes_per_sec": 0, 00:22:07.374 "w_mbytes_per_sec": 0 00:22:07.374 }, 00:22:07.374 "claimed": true, 00:22:07.374 "claim_type": "exclusive_write", 00:22:07.374 "zoned": false, 00:22:07.374 "supported_io_types": { 00:22:07.374 "read": true, 00:22:07.374 "write": true, 00:22:07.374 "unmap": true, 00:22:07.374 "flush": true, 00:22:07.374 "reset": true, 00:22:07.374 "nvme_admin": false, 00:22:07.374 "nvme_io": false, 00:22:07.374 "nvme_io_md": false, 00:22:07.374 "write_zeroes": true, 00:22:07.374 "zcopy": true, 00:22:07.374 "get_zone_info": false, 00:22:07.374 "zone_management": false, 00:22:07.374 "zone_append": false, 00:22:07.374 "compare": false, 00:22:07.374 "compare_and_write": false, 00:22:07.374 "abort": true, 00:22:07.374 "seek_hole": false, 00:22:07.374 "seek_data": false, 00:22:07.374 "copy": true, 00:22:07.374 "nvme_iov_md": false 00:22:07.374 }, 00:22:07.374 "memory_domains": [ 00:22:07.374 { 00:22:07.374 "dma_device_id": "system", 00:22:07.374 "dma_device_type": 1 00:22:07.374 }, 00:22:07.374 { 00:22:07.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.374 "dma_device_type": 2 00:22:07.374 } 00:22:07.374 ], 00:22:07.374 "driver_specific": {} 00:22:07.374 } 00:22:07.374 ] 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:07.374 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.375 "name": "Existed_Raid", 00:22:07.375 "uuid": "f522d07e-1b6d-4c03-8ac2-df6d3649e4f4", 00:22:07.375 "strip_size_kb": 64, 00:22:07.375 "state": "online", 00:22:07.375 "raid_level": "raid5f", 00:22:07.375 "superblock": true, 
00:22:07.375 "num_base_bdevs": 3, 00:22:07.375 "num_base_bdevs_discovered": 3, 00:22:07.375 "num_base_bdevs_operational": 3, 00:22:07.375 "base_bdevs_list": [ 00:22:07.375 { 00:22:07.375 "name": "BaseBdev1", 00:22:07.375 "uuid": "48424e96-433d-43ab-9c4d-52a519197039", 00:22:07.375 "is_configured": true, 00:22:07.375 "data_offset": 2048, 00:22:07.375 "data_size": 63488 00:22:07.375 }, 00:22:07.375 { 00:22:07.375 "name": "BaseBdev2", 00:22:07.375 "uuid": "09508207-83c7-458b-93ad-bf07ad13aed7", 00:22:07.375 "is_configured": true, 00:22:07.375 "data_offset": 2048, 00:22:07.375 "data_size": 63488 00:22:07.375 }, 00:22:07.375 { 00:22:07.375 "name": "BaseBdev3", 00:22:07.375 "uuid": "583a63f6-7631-41ac-81f6-8cd452f04d99", 00:22:07.375 "is_configured": true, 00:22:07.375 "data_offset": 2048, 00:22:07.375 "data_size": 63488 00:22:07.375 } 00:22:07.375 ] 00:22:07.375 }' 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.375 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:07.942 [2024-12-06 13:16:14.304793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:07.942 "name": "Existed_Raid", 00:22:07.942 "aliases": [ 00:22:07.942 "f522d07e-1b6d-4c03-8ac2-df6d3649e4f4" 00:22:07.942 ], 00:22:07.942 "product_name": "Raid Volume", 00:22:07.942 "block_size": 512, 00:22:07.942 "num_blocks": 126976, 00:22:07.942 "uuid": "f522d07e-1b6d-4c03-8ac2-df6d3649e4f4", 00:22:07.942 "assigned_rate_limits": { 00:22:07.942 "rw_ios_per_sec": 0, 00:22:07.942 "rw_mbytes_per_sec": 0, 00:22:07.942 "r_mbytes_per_sec": 0, 00:22:07.942 "w_mbytes_per_sec": 0 00:22:07.942 }, 00:22:07.942 "claimed": false, 00:22:07.942 "zoned": false, 00:22:07.942 "supported_io_types": { 00:22:07.942 "read": true, 00:22:07.942 "write": true, 00:22:07.942 "unmap": false, 00:22:07.942 "flush": false, 00:22:07.942 "reset": true, 00:22:07.942 "nvme_admin": false, 00:22:07.942 "nvme_io": false, 00:22:07.942 "nvme_io_md": false, 00:22:07.942 "write_zeroes": true, 00:22:07.942 "zcopy": false, 00:22:07.942 "get_zone_info": false, 00:22:07.942 "zone_management": false, 00:22:07.942 "zone_append": false, 00:22:07.942 "compare": false, 00:22:07.942 "compare_and_write": false, 00:22:07.942 "abort": false, 00:22:07.942 "seek_hole": false, 00:22:07.942 "seek_data": false, 00:22:07.942 "copy": false, 00:22:07.942 "nvme_iov_md": false 00:22:07.942 }, 00:22:07.942 "driver_specific": { 00:22:07.942 "raid": { 00:22:07.942 "uuid": "f522d07e-1b6d-4c03-8ac2-df6d3649e4f4", 00:22:07.942 
"strip_size_kb": 64, 00:22:07.942 "state": "online", 00:22:07.942 "raid_level": "raid5f", 00:22:07.942 "superblock": true, 00:22:07.942 "num_base_bdevs": 3, 00:22:07.942 "num_base_bdevs_discovered": 3, 00:22:07.942 "num_base_bdevs_operational": 3, 00:22:07.942 "base_bdevs_list": [ 00:22:07.942 { 00:22:07.942 "name": "BaseBdev1", 00:22:07.942 "uuid": "48424e96-433d-43ab-9c4d-52a519197039", 00:22:07.942 "is_configured": true, 00:22:07.942 "data_offset": 2048, 00:22:07.942 "data_size": 63488 00:22:07.942 }, 00:22:07.942 { 00:22:07.942 "name": "BaseBdev2", 00:22:07.942 "uuid": "09508207-83c7-458b-93ad-bf07ad13aed7", 00:22:07.942 "is_configured": true, 00:22:07.942 "data_offset": 2048, 00:22:07.942 "data_size": 63488 00:22:07.942 }, 00:22:07.942 { 00:22:07.942 "name": "BaseBdev3", 00:22:07.942 "uuid": "583a63f6-7631-41ac-81f6-8cd452f04d99", 00:22:07.942 "is_configured": true, 00:22:07.942 "data_offset": 2048, 00:22:07.942 "data_size": 63488 00:22:07.942 } 00:22:07.942 ] 00:22:07.942 } 00:22:07.942 } 00:22:07.942 }' 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:07.942 BaseBdev2 00:22:07.942 BaseBdev3' 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:07.942 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.202 
13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 [2024-12-06 13:16:14.564621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.202 "name": "Existed_Raid", 00:22:08.202 "uuid": "f522d07e-1b6d-4c03-8ac2-df6d3649e4f4", 00:22:08.202 "strip_size_kb": 64, 00:22:08.202 "state": "online", 00:22:08.202 "raid_level": "raid5f", 00:22:08.202 "superblock": true, 00:22:08.202 "num_base_bdevs": 3, 00:22:08.202 "num_base_bdevs_discovered": 2, 00:22:08.202 "num_base_bdevs_operational": 2, 
00:22:08.202 "base_bdevs_list": [ 00:22:08.202 { 00:22:08.202 "name": null, 00:22:08.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.202 "is_configured": false, 00:22:08.202 "data_offset": 0, 00:22:08.202 "data_size": 63488 00:22:08.202 }, 00:22:08.202 { 00:22:08.202 "name": "BaseBdev2", 00:22:08.202 "uuid": "09508207-83c7-458b-93ad-bf07ad13aed7", 00:22:08.202 "is_configured": true, 00:22:08.202 "data_offset": 2048, 00:22:08.202 "data_size": 63488 00:22:08.202 }, 00:22:08.202 { 00:22:08.202 "name": "BaseBdev3", 00:22:08.202 "uuid": "583a63f6-7631-41ac-81f6-8cd452f04d99", 00:22:08.202 "is_configured": true, 00:22:08.202 "data_offset": 2048, 00:22:08.202 "data_size": 63488 00:22:08.202 } 00:22:08.202 ] 00:22:08.202 }' 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.202 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 [2024-12-06 13:16:15.172257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:08.791 [2024-12-06 13:16:15.172476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:08.791 [2024-12-06 13:16:15.258559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:08.791 
13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.791 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.791 [2024-12-06 13:16:15.310620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.791 [2024-12-06 13:16:15.310682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.061 BaseBdev2 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.061 [ 00:22:09.061 { 
00:22:09.061 "name": "BaseBdev2", 00:22:09.061 "aliases": [ 00:22:09.061 "71ec4b30-6f32-4fe5-8427-cf64707d3cb6" 00:22:09.061 ], 00:22:09.061 "product_name": "Malloc disk", 00:22:09.061 "block_size": 512, 00:22:09.061 "num_blocks": 65536, 00:22:09.061 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:09.061 "assigned_rate_limits": { 00:22:09.061 "rw_ios_per_sec": 0, 00:22:09.061 "rw_mbytes_per_sec": 0, 00:22:09.061 "r_mbytes_per_sec": 0, 00:22:09.061 "w_mbytes_per_sec": 0 00:22:09.061 }, 00:22:09.061 "claimed": false, 00:22:09.061 "zoned": false, 00:22:09.061 "supported_io_types": { 00:22:09.061 "read": true, 00:22:09.061 "write": true, 00:22:09.061 "unmap": true, 00:22:09.061 "flush": true, 00:22:09.061 "reset": true, 00:22:09.061 "nvme_admin": false, 00:22:09.061 "nvme_io": false, 00:22:09.061 "nvme_io_md": false, 00:22:09.061 "write_zeroes": true, 00:22:09.061 "zcopy": true, 00:22:09.061 "get_zone_info": false, 00:22:09.061 "zone_management": false, 00:22:09.061 "zone_append": false, 00:22:09.061 "compare": false, 00:22:09.061 "compare_and_write": false, 00:22:09.061 "abort": true, 00:22:09.061 "seek_hole": false, 00:22:09.061 "seek_data": false, 00:22:09.061 "copy": true, 00:22:09.061 "nvme_iov_md": false 00:22:09.061 }, 00:22:09.061 "memory_domains": [ 00:22:09.061 { 00:22:09.061 "dma_device_id": "system", 00:22:09.061 "dma_device_type": 1 00:22:09.061 }, 00:22:09.061 { 00:22:09.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.061 "dma_device_type": 2 00:22:09.061 } 00:22:09.061 ], 00:22:09.061 "driver_specific": {} 00:22:09.061 } 00:22:09.061 ] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.061 BaseBdev3 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:09.061 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.061 13:16:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.344 [ 00:22:09.344 { 00:22:09.344 "name": "BaseBdev3", 00:22:09.344 "aliases": [ 00:22:09.344 "7832e156-1b9a-4780-a8a8-674cbd983455" 00:22:09.344 ], 00:22:09.344 "product_name": "Malloc disk", 00:22:09.344 "block_size": 512, 00:22:09.344 "num_blocks": 65536, 00:22:09.344 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:09.344 "assigned_rate_limits": { 00:22:09.344 "rw_ios_per_sec": 0, 00:22:09.344 "rw_mbytes_per_sec": 0, 00:22:09.344 "r_mbytes_per_sec": 0, 00:22:09.344 "w_mbytes_per_sec": 0 00:22:09.344 }, 00:22:09.344 "claimed": false, 00:22:09.344 "zoned": false, 00:22:09.344 "supported_io_types": { 00:22:09.344 "read": true, 00:22:09.344 "write": true, 00:22:09.344 "unmap": true, 00:22:09.344 "flush": true, 00:22:09.344 "reset": true, 00:22:09.344 "nvme_admin": false, 00:22:09.344 "nvme_io": false, 00:22:09.344 "nvme_io_md": false, 00:22:09.344 "write_zeroes": true, 00:22:09.344 "zcopy": true, 00:22:09.344 "get_zone_info": false, 00:22:09.344 "zone_management": false, 00:22:09.344 "zone_append": false, 00:22:09.344 "compare": false, 00:22:09.344 "compare_and_write": false, 00:22:09.344 "abort": true, 00:22:09.344 "seek_hole": false, 00:22:09.344 "seek_data": false, 00:22:09.344 "copy": true, 00:22:09.344 "nvme_iov_md": false 00:22:09.344 }, 00:22:09.344 "memory_domains": [ 00:22:09.344 { 00:22:09.344 "dma_device_id": "system", 00:22:09.344 "dma_device_type": 1 00:22:09.344 }, 00:22:09.344 { 00:22:09.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.344 "dma_device_type": 2 00:22:09.344 } 00:22:09.344 ], 00:22:09.344 "driver_specific": {} 00:22:09.344 } 00:22:09.344 ] 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.344 [2024-12-06 13:16:15.617125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:09.344 [2024-12-06 13:16:15.617310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:09.344 [2024-12-06 13:16:15.617363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.344 [2024-12-06 13:16:15.619898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:09.344 13:16:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.344 "name": "Existed_Raid", 00:22:09.344 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:09.344 "strip_size_kb": 64, 00:22:09.344 "state": "configuring", 00:22:09.344 "raid_level": "raid5f", 00:22:09.344 "superblock": true, 00:22:09.344 "num_base_bdevs": 3, 00:22:09.344 "num_base_bdevs_discovered": 2, 00:22:09.344 "num_base_bdevs_operational": 3, 00:22:09.344 "base_bdevs_list": [ 00:22:09.344 { 00:22:09.344 "name": "BaseBdev1", 00:22:09.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.344 "is_configured": false, 00:22:09.344 "data_offset": 0, 00:22:09.344 "data_size": 0 00:22:09.344 }, 00:22:09.344 { 00:22:09.344 "name": "BaseBdev2", 00:22:09.344 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:09.344 "is_configured": true, 00:22:09.344 "data_offset": 2048, 00:22:09.344 "data_size": 63488 00:22:09.344 }, 00:22:09.344 { 
00:22:09.344 "name": "BaseBdev3", 00:22:09.344 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:09.344 "is_configured": true, 00:22:09.344 "data_offset": 2048, 00:22:09.344 "data_size": 63488 00:22:09.344 } 00:22:09.344 ] 00:22:09.344 }' 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.344 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.603 [2024-12-06 13:16:16.113235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.603 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.861 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.861 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.861 "name": "Existed_Raid", 00:22:09.861 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:09.861 "strip_size_kb": 64, 00:22:09.861 "state": "configuring", 00:22:09.861 "raid_level": "raid5f", 00:22:09.861 "superblock": true, 00:22:09.861 "num_base_bdevs": 3, 00:22:09.861 "num_base_bdevs_discovered": 1, 00:22:09.861 "num_base_bdevs_operational": 3, 00:22:09.861 "base_bdevs_list": [ 00:22:09.861 { 00:22:09.861 "name": "BaseBdev1", 00:22:09.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.861 "is_configured": false, 00:22:09.861 "data_offset": 0, 00:22:09.861 "data_size": 0 00:22:09.861 }, 00:22:09.861 { 00:22:09.861 "name": null, 00:22:09.861 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:09.861 "is_configured": false, 00:22:09.861 "data_offset": 0, 00:22:09.861 "data_size": 63488 00:22:09.861 }, 00:22:09.861 { 00:22:09.861 "name": "BaseBdev3", 00:22:09.861 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:09.861 "is_configured": true, 00:22:09.861 "data_offset": 2048, 00:22:09.861 "data_size": 
63488 00:22:09.861 } 00:22:09.861 ] 00:22:09.861 }' 00:22:09.861 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.861 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.120 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.120 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:10.120 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.120 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.120 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.379 [2024-12-06 13:16:16.687419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:10.379 BaseBdev1 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:10.379 13:16:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.379 [ 00:22:10.379 { 00:22:10.379 "name": "BaseBdev1", 00:22:10.379 "aliases": [ 00:22:10.379 "557a16eb-7739-4230-be4f-f758d6e5e287" 00:22:10.379 ], 00:22:10.379 "product_name": "Malloc disk", 00:22:10.379 "block_size": 512, 00:22:10.379 "num_blocks": 65536, 00:22:10.379 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 00:22:10.379 "assigned_rate_limits": { 00:22:10.379 "rw_ios_per_sec": 0, 00:22:10.379 "rw_mbytes_per_sec": 0, 00:22:10.379 "r_mbytes_per_sec": 0, 00:22:10.379 "w_mbytes_per_sec": 0 00:22:10.379 }, 00:22:10.379 "claimed": true, 00:22:10.379 "claim_type": "exclusive_write", 00:22:10.379 "zoned": false, 00:22:10.379 "supported_io_types": { 00:22:10.379 "read": true, 00:22:10.379 "write": true, 00:22:10.379 "unmap": true, 00:22:10.379 "flush": true, 00:22:10.379 "reset": true, 00:22:10.379 "nvme_admin": false, 00:22:10.379 
"nvme_io": false, 00:22:10.379 "nvme_io_md": false, 00:22:10.379 "write_zeroes": true, 00:22:10.379 "zcopy": true, 00:22:10.379 "get_zone_info": false, 00:22:10.379 "zone_management": false, 00:22:10.379 "zone_append": false, 00:22:10.379 "compare": false, 00:22:10.379 "compare_and_write": false, 00:22:10.379 "abort": true, 00:22:10.379 "seek_hole": false, 00:22:10.379 "seek_data": false, 00:22:10.379 "copy": true, 00:22:10.379 "nvme_iov_md": false 00:22:10.379 }, 00:22:10.379 "memory_domains": [ 00:22:10.379 { 00:22:10.379 "dma_device_id": "system", 00:22:10.379 "dma_device_type": 1 00:22:10.379 }, 00:22:10.379 { 00:22:10.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.379 "dma_device_type": 2 00:22:10.379 } 00:22:10.379 ], 00:22:10.379 "driver_specific": {} 00:22:10.379 } 00:22:10.379 ] 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.379 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.380 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.380 "name": "Existed_Raid", 00:22:10.380 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:10.380 "strip_size_kb": 64, 00:22:10.380 "state": "configuring", 00:22:10.380 "raid_level": "raid5f", 00:22:10.380 "superblock": true, 00:22:10.380 "num_base_bdevs": 3, 00:22:10.380 "num_base_bdevs_discovered": 2, 00:22:10.380 "num_base_bdevs_operational": 3, 00:22:10.380 "base_bdevs_list": [ 00:22:10.380 { 00:22:10.380 "name": "BaseBdev1", 00:22:10.380 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 00:22:10.380 "is_configured": true, 00:22:10.380 "data_offset": 2048, 00:22:10.380 "data_size": 63488 00:22:10.380 }, 00:22:10.380 { 00:22:10.380 "name": null, 00:22:10.380 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:10.380 "is_configured": false, 00:22:10.380 "data_offset": 0, 00:22:10.380 "data_size": 63488 00:22:10.380 }, 00:22:10.380 { 00:22:10.380 "name": "BaseBdev3", 00:22:10.380 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:10.380 "is_configured": true, 00:22:10.380 "data_offset": 2048, 00:22:10.380 "data_size": 
63488 00:22:10.380 } 00:22:10.380 ] 00:22:10.380 }' 00:22:10.380 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.380 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.946 [2024-12-06 13:16:17.283661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.946 13:16:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.946 "name": "Existed_Raid", 00:22:10.946 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:10.946 "strip_size_kb": 64, 00:22:10.946 "state": "configuring", 00:22:10.946 "raid_level": "raid5f", 00:22:10.946 "superblock": true, 00:22:10.946 "num_base_bdevs": 3, 00:22:10.946 "num_base_bdevs_discovered": 1, 00:22:10.946 "num_base_bdevs_operational": 3, 00:22:10.946 "base_bdevs_list": [ 00:22:10.946 { 00:22:10.946 "name": "BaseBdev1", 00:22:10.946 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 
00:22:10.946 "is_configured": true, 00:22:10.946 "data_offset": 2048, 00:22:10.946 "data_size": 63488 00:22:10.946 }, 00:22:10.946 { 00:22:10.946 "name": null, 00:22:10.946 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:10.946 "is_configured": false, 00:22:10.946 "data_offset": 0, 00:22:10.946 "data_size": 63488 00:22:10.946 }, 00:22:10.946 { 00:22:10.946 "name": null, 00:22:10.946 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:10.946 "is_configured": false, 00:22:10.946 "data_offset": 0, 00:22:10.946 "data_size": 63488 00:22:10.946 } 00:22:10.946 ] 00:22:10.946 }' 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.946 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.513 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.513 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.514 [2024-12-06 13:16:17.859932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.514 "name": "Existed_Raid", 00:22:11.514 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:11.514 "strip_size_kb": 64, 00:22:11.514 "state": "configuring", 00:22:11.514 "raid_level": "raid5f", 00:22:11.514 "superblock": true, 00:22:11.514 "num_base_bdevs": 3, 00:22:11.514 "num_base_bdevs_discovered": 2, 00:22:11.514 "num_base_bdevs_operational": 3, 00:22:11.514 "base_bdevs_list": [ 00:22:11.514 { 00:22:11.514 "name": "BaseBdev1", 00:22:11.514 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 00:22:11.514 "is_configured": true, 00:22:11.514 "data_offset": 2048, 00:22:11.514 "data_size": 63488 00:22:11.514 }, 00:22:11.514 { 00:22:11.514 "name": null, 00:22:11.514 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:11.514 "is_configured": false, 00:22:11.514 "data_offset": 0, 00:22:11.514 "data_size": 63488 00:22:11.514 }, 00:22:11.514 { 00:22:11.514 "name": "BaseBdev3", 00:22:11.514 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:11.514 "is_configured": true, 00:22:11.514 "data_offset": 2048, 00:22:11.514 "data_size": 63488 00:22:11.514 } 00:22:11.514 ] 00:22:11.514 }' 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.514 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.080 13:16:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.080 [2024-12-06 13:16:18.440121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.080 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:12.081 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.081 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.081 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.081 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.081 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.081 "name": "Existed_Raid", 00:22:12.081 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:12.081 "strip_size_kb": 64, 00:22:12.081 "state": "configuring", 00:22:12.081 "raid_level": "raid5f", 00:22:12.081 "superblock": true, 00:22:12.081 "num_base_bdevs": 3, 00:22:12.081 "num_base_bdevs_discovered": 1, 00:22:12.081 "num_base_bdevs_operational": 3, 00:22:12.081 "base_bdevs_list": [ 00:22:12.081 { 00:22:12.081 "name": null, 00:22:12.081 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 00:22:12.081 "is_configured": false, 00:22:12.081 "data_offset": 0, 00:22:12.081 "data_size": 63488 00:22:12.081 }, 00:22:12.081 { 00:22:12.081 "name": null, 00:22:12.081 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:12.081 "is_configured": false, 00:22:12.081 "data_offset": 0, 00:22:12.081 "data_size": 63488 00:22:12.081 }, 00:22:12.081 { 00:22:12.081 "name": "BaseBdev3", 00:22:12.081 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:12.081 "is_configured": true, 00:22:12.081 "data_offset": 2048, 00:22:12.081 "data_size": 63488 00:22:12.081 } 00:22:12.081 ] 00:22:12.081 }' 00:22:12.081 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.081 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.645 [2024-12-06 13:16:19.123341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:12.645 13:16:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.645 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.902 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.902 "name": "Existed_Raid", 00:22:12.902 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:12.902 "strip_size_kb": 64, 00:22:12.902 "state": "configuring", 00:22:12.902 "raid_level": "raid5f", 00:22:12.902 "superblock": true, 00:22:12.902 "num_base_bdevs": 3, 00:22:12.902 "num_base_bdevs_discovered": 2, 00:22:12.902 "num_base_bdevs_operational": 3, 00:22:12.902 "base_bdevs_list": [ 00:22:12.902 { 00:22:12.902 "name": null, 00:22:12.902 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 00:22:12.902 "is_configured": false, 00:22:12.902 "data_offset": 0, 00:22:12.902 "data_size": 63488 00:22:12.902 }, 00:22:12.902 { 00:22:12.902 "name": "BaseBdev2", 00:22:12.902 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:12.902 "is_configured": true, 00:22:12.902 "data_offset": 2048, 00:22:12.902 "data_size": 63488 00:22:12.902 }, 00:22:12.902 { 
00:22:12.902 "name": "BaseBdev3", 00:22:12.902 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:12.902 "is_configured": true, 00:22:12.902 "data_offset": 2048, 00:22:12.902 "data_size": 63488 00:22:12.902 } 00:22:12.902 ] 00:22:12.902 }' 00:22:12.902 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.902 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 557a16eb-7739-4230-be4f-f758d6e5e287 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.467 [2024-12-06 13:16:19.855053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:13.467 NewBaseBdev 00:22:13.467 [2024-12-06 13:16:19.855576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:13.467 [2024-12-06 13:16:19.855608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:13.467 [2024-12-06 13:16:19.855941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.467 [2024-12-06 13:16:19.861050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:13.467 
[2024-12-06 13:16:19.861076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:13.467 [2024-12-06 13:16:19.861414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.467 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.467 [ 00:22:13.467 { 00:22:13.467 "name": "NewBaseBdev", 00:22:13.467 "aliases": [ 00:22:13.467 "557a16eb-7739-4230-be4f-f758d6e5e287" 00:22:13.467 ], 00:22:13.467 "product_name": "Malloc disk", 00:22:13.467 "block_size": 512, 00:22:13.467 "num_blocks": 65536, 00:22:13.467 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 00:22:13.467 "assigned_rate_limits": { 00:22:13.467 "rw_ios_per_sec": 0, 00:22:13.467 "rw_mbytes_per_sec": 0, 00:22:13.467 "r_mbytes_per_sec": 0, 00:22:13.467 "w_mbytes_per_sec": 0 00:22:13.467 }, 00:22:13.467 "claimed": true, 00:22:13.467 "claim_type": "exclusive_write", 00:22:13.467 "zoned": false, 00:22:13.467 "supported_io_types": { 00:22:13.467 "read": true, 00:22:13.467 "write": true, 00:22:13.467 "unmap": true, 00:22:13.467 "flush": true, 00:22:13.467 "reset": true, 00:22:13.467 "nvme_admin": false, 00:22:13.467 "nvme_io": false, 00:22:13.467 "nvme_io_md": false, 00:22:13.467 "write_zeroes": true, 00:22:13.467 "zcopy": true, 00:22:13.467 "get_zone_info": false, 00:22:13.467 "zone_management": false, 00:22:13.467 "zone_append": false, 00:22:13.467 "compare": false, 00:22:13.467 "compare_and_write": false, 00:22:13.467 "abort": true, 00:22:13.467 "seek_hole": false, 00:22:13.467 "seek_data": false, 
00:22:13.467 "copy": true, 00:22:13.467 "nvme_iov_md": false 00:22:13.467 }, 00:22:13.467 "memory_domains": [ 00:22:13.467 { 00:22:13.467 "dma_device_id": "system", 00:22:13.467 "dma_device_type": 1 00:22:13.467 }, 00:22:13.467 { 00:22:13.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.467 "dma_device_type": 2 00:22:13.467 } 00:22:13.467 ], 00:22:13.467 "driver_specific": {} 00:22:13.467 } 00:22:13.467 ] 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.468 13:16:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.468 "name": "Existed_Raid", 00:22:13.468 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:13.468 "strip_size_kb": 64, 00:22:13.468 "state": "online", 00:22:13.468 "raid_level": "raid5f", 00:22:13.468 "superblock": true, 00:22:13.468 "num_base_bdevs": 3, 00:22:13.468 "num_base_bdevs_discovered": 3, 00:22:13.468 "num_base_bdevs_operational": 3, 00:22:13.468 "base_bdevs_list": [ 00:22:13.468 { 00:22:13.468 "name": "NewBaseBdev", 00:22:13.468 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 00:22:13.468 "is_configured": true, 00:22:13.468 "data_offset": 2048, 00:22:13.468 "data_size": 63488 00:22:13.468 }, 00:22:13.468 { 00:22:13.468 "name": "BaseBdev2", 00:22:13.468 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:13.468 "is_configured": true, 00:22:13.468 "data_offset": 2048, 00:22:13.468 "data_size": 63488 00:22:13.468 }, 00:22:13.468 { 00:22:13.468 "name": "BaseBdev3", 00:22:13.468 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:13.468 "is_configured": true, 00:22:13.468 "data_offset": 2048, 00:22:13.468 "data_size": 63488 00:22:13.468 } 00:22:13.468 ] 00:22:13.468 }' 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.468 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.034 [2024-12-06 13:16:20.408402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.034 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.034 "name": "Existed_Raid", 00:22:14.034 "aliases": [ 00:22:14.034 "071a012f-22aa-44df-ab95-8015b236e853" 00:22:14.034 ], 00:22:14.034 "product_name": "Raid Volume", 00:22:14.034 "block_size": 512, 00:22:14.034 "num_blocks": 126976, 00:22:14.034 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:14.034 "assigned_rate_limits": { 00:22:14.034 "rw_ios_per_sec": 0, 00:22:14.034 "rw_mbytes_per_sec": 0, 00:22:14.034 "r_mbytes_per_sec": 0, 00:22:14.034 "w_mbytes_per_sec": 0 00:22:14.034 }, 00:22:14.034 "claimed": false, 00:22:14.034 "zoned": false, 00:22:14.034 
"supported_io_types": { 00:22:14.034 "read": true, 00:22:14.034 "write": true, 00:22:14.034 "unmap": false, 00:22:14.034 "flush": false, 00:22:14.034 "reset": true, 00:22:14.034 "nvme_admin": false, 00:22:14.034 "nvme_io": false, 00:22:14.034 "nvme_io_md": false, 00:22:14.034 "write_zeroes": true, 00:22:14.034 "zcopy": false, 00:22:14.034 "get_zone_info": false, 00:22:14.034 "zone_management": false, 00:22:14.034 "zone_append": false, 00:22:14.034 "compare": false, 00:22:14.034 "compare_and_write": false, 00:22:14.034 "abort": false, 00:22:14.034 "seek_hole": false, 00:22:14.034 "seek_data": false, 00:22:14.034 "copy": false, 00:22:14.034 "nvme_iov_md": false 00:22:14.034 }, 00:22:14.034 "driver_specific": { 00:22:14.034 "raid": { 00:22:14.034 "uuid": "071a012f-22aa-44df-ab95-8015b236e853", 00:22:14.034 "strip_size_kb": 64, 00:22:14.034 "state": "online", 00:22:14.034 "raid_level": "raid5f", 00:22:14.034 "superblock": true, 00:22:14.034 "num_base_bdevs": 3, 00:22:14.034 "num_base_bdevs_discovered": 3, 00:22:14.034 "num_base_bdevs_operational": 3, 00:22:14.034 "base_bdevs_list": [ 00:22:14.034 { 00:22:14.034 "name": "NewBaseBdev", 00:22:14.034 "uuid": "557a16eb-7739-4230-be4f-f758d6e5e287", 00:22:14.034 "is_configured": true, 00:22:14.034 "data_offset": 2048, 00:22:14.034 "data_size": 63488 00:22:14.034 }, 00:22:14.034 { 00:22:14.034 "name": "BaseBdev2", 00:22:14.034 "uuid": "71ec4b30-6f32-4fe5-8427-cf64707d3cb6", 00:22:14.034 "is_configured": true, 00:22:14.034 "data_offset": 2048, 00:22:14.034 "data_size": 63488 00:22:14.034 }, 00:22:14.034 { 00:22:14.034 "name": "BaseBdev3", 00:22:14.034 "uuid": "7832e156-1b9a-4780-a8a8-674cbd983455", 00:22:14.034 "is_configured": true, 00:22:14.034 "data_offset": 2048, 00:22:14.034 "data_size": 63488 00:22:14.034 } 00:22:14.034 ] 00:22:14.034 } 00:22:14.034 } 00:22:14.034 }' 00:22:14.035 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:22:14.035 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:14.035 BaseBdev2 00:22:14.035 BaseBdev3' 00:22:14.035 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.035 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:14.035 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.293 [2024-12-06 13:16:20.728194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:14.293 [2024-12-06 13:16:20.728227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:22:14.293 [2024-12-06 13:16:20.728385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.293 [2024-12-06 13:16:20.728803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.293 [2024-12-06 13:16:20.728835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81207 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81207 ']' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81207 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81207 00:22:14.293 killing process with pid 81207 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81207' 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81207 00:22:14.293 [2024-12-06 13:16:20.769064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:14.293 13:16:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 81207 00:22:14.550 [2024-12-06 13:16:21.034861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:15.924 13:16:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:15.924 00:22:15.924 real 0m11.913s 00:22:15.924 user 0m19.653s 00:22:15.924 sys 0m1.738s 00:22:15.924 13:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.924 ************************************ 00:22:15.924 END TEST raid5f_state_function_test_sb 00:22:15.924 ************************************ 00:22:15.924 13:16:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.924 13:16:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:15.924 13:16:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:15.924 13:16:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.924 13:16:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:15.924 ************************************ 00:22:15.924 START TEST raid5f_superblock_test 00:22:15.924 ************************************ 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81834 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81834 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81834 ']' 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:15.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.924 13:16:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.924 [2024-12-06 13:16:22.258665] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:15.924 [2024-12-06 13:16:22.259074] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81834 ] 00:22:16.183 [2024-12-06 13:16:22.452321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.183 [2024-12-06 13:16:22.638100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.441 [2024-12-06 13:16:22.868363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.441 [2024-12-06 13:16:22.868704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:17.008 13:16:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.008 malloc1 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.008 [2024-12-06 13:16:23.349195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.008 [2024-12-06 13:16:23.349277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.008 [2024-12-06 13:16:23.349316] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:17.008 [2024-12-06 13:16:23.349333] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.008 [2024-12-06 13:16:23.352365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.008 [2024-12-06 13:16:23.352560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.008 pt1 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.008 malloc2 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.008 [2024-12-06 13:16:23.406192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:17.008 [2024-12-06 13:16:23.406287] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.008 [2024-12-06 13:16:23.406324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:17.008 [2024-12-06 13:16:23.406340] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.008 [2024-12-06 13:16:23.409343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.008 [2024-12-06 13:16:23.409390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:17.008 pt2 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.008 malloc3 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.008 [2024-12-06 13:16:23.470830] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:17.008 [2024-12-06 13:16:23.470917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.008 [2024-12-06 13:16:23.470951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:17.008 [2024-12-06 13:16:23.470967] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.008 [2024-12-06 13:16:23.473816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.008 [2024-12-06 13:16:23.473871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:17.008 pt3 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.008 [2024-12-06 13:16:23.482887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.008 [2024-12-06 
13:16:23.485302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.008 [2024-12-06 13:16:23.485398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:17.008 [2024-12-06 13:16:23.485665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:17.008 [2024-12-06 13:16:23.485696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:17.008 [2024-12-06 13:16:23.486006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:17.008 [2024-12-06 13:16:23.491305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:17.008 [2024-12-06 13:16:23.491437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:17.008 [2024-12-06 13:16:23.491863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:17.008 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.009 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:17.009 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.009 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.009 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.009 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.009 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.009 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.009 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.267 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.267 "name": "raid_bdev1", 00:22:17.267 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:17.267 "strip_size_kb": 64, 00:22:17.267 "state": "online", 00:22:17.267 "raid_level": "raid5f", 00:22:17.267 "superblock": true, 00:22:17.267 "num_base_bdevs": 3, 00:22:17.267 "num_base_bdevs_discovered": 3, 00:22:17.267 "num_base_bdevs_operational": 3, 00:22:17.267 "base_bdevs_list": [ 00:22:17.267 { 00:22:17.267 "name": "pt1", 00:22:17.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.267 "is_configured": true, 00:22:17.267 "data_offset": 2048, 00:22:17.267 "data_size": 63488 00:22:17.267 }, 00:22:17.267 { 00:22:17.267 "name": "pt2", 00:22:17.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.267 "is_configured": true, 00:22:17.267 "data_offset": 2048, 00:22:17.267 "data_size": 63488 00:22:17.267 }, 00:22:17.267 { 00:22:17.267 "name": "pt3", 00:22:17.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.267 "is_configured": true, 00:22:17.267 "data_offset": 2048, 00:22:17.267 "data_size": 63488 00:22:17.267 } 00:22:17.267 ] 00:22:17.267 }' 00:22:17.267 13:16:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.267 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.554 [2024-12-06 13:16:24.018071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:17.554 "name": "raid_bdev1", 00:22:17.554 "aliases": [ 00:22:17.554 "08696742-a255-4a5d-b779-0220fa831f05" 00:22:17.554 ], 00:22:17.554 "product_name": "Raid Volume", 00:22:17.554 "block_size": 512, 00:22:17.554 "num_blocks": 126976, 00:22:17.554 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:17.554 "assigned_rate_limits": { 00:22:17.554 "rw_ios_per_sec": 0, 00:22:17.554 
"rw_mbytes_per_sec": 0, 00:22:17.554 "r_mbytes_per_sec": 0, 00:22:17.554 "w_mbytes_per_sec": 0 00:22:17.554 }, 00:22:17.554 "claimed": false, 00:22:17.554 "zoned": false, 00:22:17.554 "supported_io_types": { 00:22:17.554 "read": true, 00:22:17.554 "write": true, 00:22:17.554 "unmap": false, 00:22:17.554 "flush": false, 00:22:17.554 "reset": true, 00:22:17.554 "nvme_admin": false, 00:22:17.554 "nvme_io": false, 00:22:17.554 "nvme_io_md": false, 00:22:17.554 "write_zeroes": true, 00:22:17.554 "zcopy": false, 00:22:17.554 "get_zone_info": false, 00:22:17.554 "zone_management": false, 00:22:17.554 "zone_append": false, 00:22:17.554 "compare": false, 00:22:17.554 "compare_and_write": false, 00:22:17.554 "abort": false, 00:22:17.554 "seek_hole": false, 00:22:17.554 "seek_data": false, 00:22:17.554 "copy": false, 00:22:17.554 "nvme_iov_md": false 00:22:17.554 }, 00:22:17.554 "driver_specific": { 00:22:17.554 "raid": { 00:22:17.554 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:17.554 "strip_size_kb": 64, 00:22:17.554 "state": "online", 00:22:17.554 "raid_level": "raid5f", 00:22:17.554 "superblock": true, 00:22:17.554 "num_base_bdevs": 3, 00:22:17.554 "num_base_bdevs_discovered": 3, 00:22:17.554 "num_base_bdevs_operational": 3, 00:22:17.554 "base_bdevs_list": [ 00:22:17.554 { 00:22:17.554 "name": "pt1", 00:22:17.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.554 "is_configured": true, 00:22:17.554 "data_offset": 2048, 00:22:17.554 "data_size": 63488 00:22:17.554 }, 00:22:17.554 { 00:22:17.554 "name": "pt2", 00:22:17.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.554 "is_configured": true, 00:22:17.554 "data_offset": 2048, 00:22:17.554 "data_size": 63488 00:22:17.554 }, 00:22:17.554 { 00:22:17.554 "name": "pt3", 00:22:17.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:17.554 "is_configured": true, 00:22:17.554 "data_offset": 2048, 00:22:17.554 "data_size": 63488 00:22:17.554 } 00:22:17.554 ] 00:22:17.554 } 00:22:17.554 } 
00:22:17.554 }' 00:22:17.554 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:17.813 pt2 00:22:17.813 pt3' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.813 13:16:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.813 [2024-12-06 13:16:24.310036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.813 13:16:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=08696742-a255-4a5d-b779-0220fa831f05 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 08696742-a255-4a5d-b779-0220fa831f05 ']' 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.072 [2024-12-06 13:16:24.353818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.072 [2024-12-06 13:16:24.353850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:18.072 [2024-12-06 13:16:24.353950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.072 [2024-12-06 13:16:24.354056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.072 [2024-12-06 13:16:24.354074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.072 13:16:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.072 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.073 [2024-12-06 13:16:24.493978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:18.073 [2024-12-06 
13:16:24.496500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:18.073 [2024-12-06 13:16:24.496570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:18.073 [2024-12-06 13:16:24.496646] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:18.073 [2024-12-06 13:16:24.496741] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:18.073 [2024-12-06 13:16:24.496776] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:18.073 [2024-12-06 13:16:24.496804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.073 [2024-12-06 13:16:24.496818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:18.073 request: 00:22:18.073 { 00:22:18.073 "name": "raid_bdev1", 00:22:18.073 "raid_level": "raid5f", 00:22:18.073 "base_bdevs": [ 00:22:18.073 "malloc1", 00:22:18.073 "malloc2", 00:22:18.073 "malloc3" 00:22:18.073 ], 00:22:18.073 "strip_size_kb": 64, 00:22:18.073 "superblock": false, 00:22:18.073 "method": "bdev_raid_create", 00:22:18.073 "req_id": 1 00:22:18.073 } 00:22:18.073 Got JSON-RPC error response 00:22:18.073 response: 00:22:18.073 { 00:22:18.073 "code": -17, 00:22:18.073 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:18.073 } 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.073 [2024-12-06 13:16:24.581914] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:18.073 [2024-12-06 13:16:24.582129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.073 [2024-12-06 13:16:24.582206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:18.073 [2024-12-06 13:16:24.582402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.073 [2024-12-06 13:16:24.585555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.073 [2024-12-06 13:16:24.585716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:18.073 [2024-12-06 13:16:24.585962] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:18.073 [2024-12-06 13:16:24.586143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:18.073 pt1 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.073 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.331 13:16:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.331 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.331 "name": "raid_bdev1", 00:22:18.331 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:18.331 "strip_size_kb": 64, 00:22:18.331 "state": "configuring", 00:22:18.331 "raid_level": "raid5f", 00:22:18.331 "superblock": true, 00:22:18.331 "num_base_bdevs": 3, 00:22:18.331 "num_base_bdevs_discovered": 1, 00:22:18.331 "num_base_bdevs_operational": 3, 00:22:18.331 "base_bdevs_list": [ 00:22:18.331 { 00:22:18.331 "name": "pt1", 00:22:18.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.331 "is_configured": true, 00:22:18.331 "data_offset": 2048, 00:22:18.332 "data_size": 63488 00:22:18.332 }, 00:22:18.332 { 00:22:18.332 "name": null, 00:22:18.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.332 "is_configured": false, 00:22:18.332 "data_offset": 2048, 00:22:18.332 "data_size": 63488 00:22:18.332 }, 00:22:18.332 { 00:22:18.332 "name": null, 00:22:18.332 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.332 "is_configured": false, 00:22:18.332 "data_offset": 2048, 00:22:18.332 "data_size": 63488 00:22:18.332 } 00:22:18.332 ] 00:22:18.332 }' 00:22:18.332 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.332 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.590 [2024-12-06 13:16:25.078207] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:18.590 [2024-12-06 13:16:25.078300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.590 [2024-12-06 13:16:25.078338] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:18.590 [2024-12-06 13:16:25.078355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.590 [2024-12-06 13:16:25.078957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.590 [2024-12-06 13:16:25.079160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:18.590 [2024-12-06 13:16:25.079315] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:18.590 [2024-12-06 13:16:25.079358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.590 pt2 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.590 [2024-12-06 13:16:25.086169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.590 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.848 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.848 "name": "raid_bdev1", 00:22:18.848 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:18.848 "strip_size_kb": 64, 00:22:18.848 "state": "configuring", 00:22:18.848 "raid_level": "raid5f", 00:22:18.848 "superblock": true, 00:22:18.848 "num_base_bdevs": 3, 00:22:18.848 "num_base_bdevs_discovered": 1, 00:22:18.848 "num_base_bdevs_operational": 3, 00:22:18.848 "base_bdevs_list": [ 00:22:18.848 { 00:22:18.848 "name": "pt1", 00:22:18.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.848 "is_configured": true, 00:22:18.848 "data_offset": 2048, 00:22:18.848 "data_size": 63488 00:22:18.848 }, 00:22:18.848 { 
00:22:18.848 "name": null, 00:22:18.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.848 "is_configured": false, 00:22:18.848 "data_offset": 0, 00:22:18.848 "data_size": 63488 00:22:18.848 }, 00:22:18.848 { 00:22:18.848 "name": null, 00:22:18.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:18.848 "is_configured": false, 00:22:18.848 "data_offset": 2048, 00:22:18.848 "data_size": 63488 00:22:18.848 } 00:22:18.848 ] 00:22:18.848 }' 00:22:18.848 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.848 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.108 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:19.108 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.108 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:19.108 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.108 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.366 [2024-12-06 13:16:25.634299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:19.366 [2024-12-06 13:16:25.634390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.366 [2024-12-06 13:16:25.634427] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:19.366 [2024-12-06 13:16:25.634461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.366 [2024-12-06 13:16:25.635053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.366 [2024-12-06 13:16:25.635102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:19.366 [2024-12-06 
13:16:25.635203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:19.366 [2024-12-06 13:16:25.635241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:19.366 pt2 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.366 [2024-12-06 13:16:25.642278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:19.366 [2024-12-06 13:16:25.642478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.366 [2024-12-06 13:16:25.642511] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:19.366 [2024-12-06 13:16:25.642529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.366 [2024-12-06 13:16:25.642967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.366 [2024-12-06 13:16:25.643009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:19.366 [2024-12-06 13:16:25.643086] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:19.366 [2024-12-06 13:16:25.643119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:19.366 [2024-12-06 13:16:25.643285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:22:19.366 [2024-12-06 13:16:25.643306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:19.366 [2024-12-06 13:16:25.643652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:19.366 [2024-12-06 13:16:25.648547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:19.366 pt3 00:22:19.366 [2024-12-06 13:16:25.648691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:19.366 [2024-12-06 13:16:25.648932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.366 "name": "raid_bdev1", 00:22:19.366 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:19.366 "strip_size_kb": 64, 00:22:19.366 "state": "online", 00:22:19.366 "raid_level": "raid5f", 00:22:19.366 "superblock": true, 00:22:19.366 "num_base_bdevs": 3, 00:22:19.366 "num_base_bdevs_discovered": 3, 00:22:19.366 "num_base_bdevs_operational": 3, 00:22:19.366 "base_bdevs_list": [ 00:22:19.366 { 00:22:19.366 "name": "pt1", 00:22:19.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.366 "is_configured": true, 00:22:19.366 "data_offset": 2048, 00:22:19.366 "data_size": 63488 00:22:19.366 }, 00:22:19.366 { 00:22:19.366 "name": "pt2", 00:22:19.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.366 "is_configured": true, 00:22:19.366 "data_offset": 2048, 00:22:19.366 "data_size": 63488 00:22:19.366 }, 00:22:19.366 { 00:22:19.366 "name": "pt3", 00:22:19.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.366 "is_configured": true, 00:22:19.366 "data_offset": 2048, 00:22:19.366 "data_size": 63488 00:22:19.366 } 00:22:19.366 ] 00:22:19.366 }' 00:22:19.366 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.366 13:16:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.931 [2024-12-06 13:16:26.183239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:19.931 "name": "raid_bdev1", 00:22:19.931 "aliases": [ 00:22:19.931 "08696742-a255-4a5d-b779-0220fa831f05" 00:22:19.931 ], 00:22:19.931 "product_name": "Raid Volume", 00:22:19.931 "block_size": 512, 00:22:19.931 "num_blocks": 126976, 00:22:19.931 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:19.931 "assigned_rate_limits": { 00:22:19.931 "rw_ios_per_sec": 0, 00:22:19.931 "rw_mbytes_per_sec": 0, 00:22:19.931 "r_mbytes_per_sec": 0, 00:22:19.931 "w_mbytes_per_sec": 0 00:22:19.931 }, 
00:22:19.931 "claimed": false, 00:22:19.931 "zoned": false, 00:22:19.931 "supported_io_types": { 00:22:19.931 "read": true, 00:22:19.931 "write": true, 00:22:19.931 "unmap": false, 00:22:19.931 "flush": false, 00:22:19.931 "reset": true, 00:22:19.931 "nvme_admin": false, 00:22:19.931 "nvme_io": false, 00:22:19.931 "nvme_io_md": false, 00:22:19.931 "write_zeroes": true, 00:22:19.931 "zcopy": false, 00:22:19.931 "get_zone_info": false, 00:22:19.931 "zone_management": false, 00:22:19.931 "zone_append": false, 00:22:19.931 "compare": false, 00:22:19.931 "compare_and_write": false, 00:22:19.931 "abort": false, 00:22:19.931 "seek_hole": false, 00:22:19.931 "seek_data": false, 00:22:19.931 "copy": false, 00:22:19.931 "nvme_iov_md": false 00:22:19.931 }, 00:22:19.931 "driver_specific": { 00:22:19.931 "raid": { 00:22:19.931 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:19.931 "strip_size_kb": 64, 00:22:19.931 "state": "online", 00:22:19.931 "raid_level": "raid5f", 00:22:19.931 "superblock": true, 00:22:19.931 "num_base_bdevs": 3, 00:22:19.931 "num_base_bdevs_discovered": 3, 00:22:19.931 "num_base_bdevs_operational": 3, 00:22:19.931 "base_bdevs_list": [ 00:22:19.931 { 00:22:19.931 "name": "pt1", 00:22:19.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.931 "is_configured": true, 00:22:19.931 "data_offset": 2048, 00:22:19.931 "data_size": 63488 00:22:19.931 }, 00:22:19.931 { 00:22:19.931 "name": "pt2", 00:22:19.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.931 "is_configured": true, 00:22:19.931 "data_offset": 2048, 00:22:19.931 "data_size": 63488 00:22:19.931 }, 00:22:19.931 { 00:22:19.931 "name": "pt3", 00:22:19.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:19.931 "is_configured": true, 00:22:19.931 "data_offset": 2048, 00:22:19.931 "data_size": 63488 00:22:19.931 } 00:22:19.931 ] 00:22:19.931 } 00:22:19.931 } 00:22:19.931 }' 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:19.931 pt2 00:22:19.931 pt3' 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.931 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.932 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.190 [2024-12-06 13:16:26.499267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
08696742-a255-4a5d-b779-0220fa831f05 '!=' 08696742-a255-4a5d-b779-0220fa831f05 ']' 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.190 [2024-12-06 13:16:26.551100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.190 "name": "raid_bdev1", 00:22:20.190 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:20.190 "strip_size_kb": 64, 00:22:20.190 "state": "online", 00:22:20.190 "raid_level": "raid5f", 00:22:20.190 "superblock": true, 00:22:20.190 "num_base_bdevs": 3, 00:22:20.190 "num_base_bdevs_discovered": 2, 00:22:20.190 "num_base_bdevs_operational": 2, 00:22:20.190 "base_bdevs_list": [ 00:22:20.190 { 00:22:20.190 "name": null, 00:22:20.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.190 "is_configured": false, 00:22:20.190 "data_offset": 0, 00:22:20.190 "data_size": 63488 00:22:20.190 }, 00:22:20.190 { 00:22:20.190 "name": "pt2", 00:22:20.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.190 "is_configured": true, 00:22:20.190 "data_offset": 2048, 00:22:20.190 "data_size": 63488 00:22:20.190 }, 00:22:20.190 { 00:22:20.190 "name": "pt3", 00:22:20.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.190 "is_configured": true, 00:22:20.190 "data_offset": 2048, 00:22:20.190 "data_size": 63488 00:22:20.190 } 00:22:20.190 ] 00:22:20.190 }' 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.190 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.814 
13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:20.814 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.814 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.814 [2024-12-06 13:16:27.095212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.814 [2024-12-06 13:16:27.095248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.814 [2024-12-06 13:16:27.095348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.815 [2024-12-06 13:16:27.095432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.815 [2024-12-06 13:16:27.095472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.815 [2024-12-06 13:16:27.183170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:22:20.815 [2024-12-06 13:16:27.183241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.815 [2024-12-06 13:16:27.183267] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:20.815 [2024-12-06 13:16:27.183285] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.815 [2024-12-06 13:16:27.186146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.815 [2024-12-06 13:16:27.186343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:20.815 [2024-12-06 13:16:27.186473] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:20.815 [2024-12-06 13:16:27.186544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.815 pt2 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.815 "name": "raid_bdev1", 00:22:20.815 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:20.815 "strip_size_kb": 64, 00:22:20.815 "state": "configuring", 00:22:20.815 "raid_level": "raid5f", 00:22:20.815 "superblock": true, 00:22:20.815 "num_base_bdevs": 3, 00:22:20.815 "num_base_bdevs_discovered": 1, 00:22:20.815 "num_base_bdevs_operational": 2, 00:22:20.815 "base_bdevs_list": [ 00:22:20.815 { 00:22:20.815 "name": null, 00:22:20.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.815 "is_configured": false, 00:22:20.815 "data_offset": 2048, 00:22:20.815 "data_size": 63488 00:22:20.815 }, 00:22:20.815 { 00:22:20.815 "name": "pt2", 00:22:20.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.815 "is_configured": true, 00:22:20.815 "data_offset": 2048, 00:22:20.815 "data_size": 63488 00:22:20.815 }, 00:22:20.815 { 00:22:20.815 "name": null, 00:22:20.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.815 "is_configured": false, 00:22:20.815 "data_offset": 2048, 00:22:20.815 "data_size": 63488 00:22:20.815 } 00:22:20.815 ] 00:22:20.815 }' 00:22:20.815 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.815 13:16:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.381 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.382 [2024-12-06 13:16:27.691341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:21.382 [2024-12-06 13:16:27.691442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.382 [2024-12-06 13:16:27.691493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:21.382 [2024-12-06 13:16:27.691513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.382 [2024-12-06 13:16:27.692122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.382 [2024-12-06 13:16:27.692151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:21.382 [2024-12-06 13:16:27.692255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:21.382 [2024-12-06 13:16:27.692296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:21.382 [2024-12-06 13:16:27.692461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:21.382 [2024-12-06 13:16:27.692484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:21.382 [2024-12-06 
13:16:27.692803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:21.382 [2024-12-06 13:16:27.697781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:21.382 [2024-12-06 13:16:27.697808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:21.382 [2024-12-06 13:16:27.698144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.382 pt3 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.382 13:16:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.382 "name": "raid_bdev1", 00:22:21.382 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:21.382 "strip_size_kb": 64, 00:22:21.382 "state": "online", 00:22:21.382 "raid_level": "raid5f", 00:22:21.382 "superblock": true, 00:22:21.382 "num_base_bdevs": 3, 00:22:21.382 "num_base_bdevs_discovered": 2, 00:22:21.382 "num_base_bdevs_operational": 2, 00:22:21.382 "base_bdevs_list": [ 00:22:21.382 { 00:22:21.382 "name": null, 00:22:21.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.382 "is_configured": false, 00:22:21.382 "data_offset": 2048, 00:22:21.382 "data_size": 63488 00:22:21.382 }, 00:22:21.382 { 00:22:21.382 "name": "pt2", 00:22:21.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:21.382 "is_configured": true, 00:22:21.382 "data_offset": 2048, 00:22:21.382 "data_size": 63488 00:22:21.382 }, 00:22:21.382 { 00:22:21.382 "name": "pt3", 00:22:21.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:21.382 "is_configured": true, 00:22:21.382 "data_offset": 2048, 00:22:21.382 "data_size": 63488 00:22:21.382 } 00:22:21.382 ] 00:22:21.382 }' 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.382 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.949 [2024-12-06 13:16:28.187812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:21.949 [2024-12-06 13:16:28.187991] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:21.949 [2024-12-06 13:16:28.188123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.949 [2024-12-06 13:16:28.188214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:21.949 [2024-12-06 13:16:28.188231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.949 13:16:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.949 [2024-12-06 13:16:28.251855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:21.949 [2024-12-06 13:16:28.251932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.949 [2024-12-06 13:16:28.251963] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:21.949 [2024-12-06 13:16:28.251978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.949 [2024-12-06 13:16:28.254875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.949 [2024-12-06 13:16:28.254919] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:21.949 [2024-12-06 13:16:28.255027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:21.949 [2024-12-06 13:16:28.255086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:21.949 [2024-12-06 13:16:28.255279] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:21.949 [2024-12-06 13:16:28.255309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:21.949 [2024-12-06 13:16:28.255336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:21.949 
[2024-12-06 13:16:28.255403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:21.949 pt1 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:21.949 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.950 "name": "raid_bdev1", 00:22:21.950 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:21.950 "strip_size_kb": 64, 00:22:21.950 "state": "configuring", 00:22:21.950 "raid_level": "raid5f", 00:22:21.950 "superblock": true, 00:22:21.950 "num_base_bdevs": 3, 00:22:21.950 "num_base_bdevs_discovered": 1, 00:22:21.950 "num_base_bdevs_operational": 2, 00:22:21.950 "base_bdevs_list": [ 00:22:21.950 { 00:22:21.950 "name": null, 00:22:21.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.950 "is_configured": false, 00:22:21.950 "data_offset": 2048, 00:22:21.950 "data_size": 63488 00:22:21.950 }, 00:22:21.950 { 00:22:21.950 "name": "pt2", 00:22:21.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:21.950 "is_configured": true, 00:22:21.950 "data_offset": 2048, 00:22:21.950 "data_size": 63488 00:22:21.950 }, 00:22:21.950 { 00:22:21.950 "name": null, 00:22:21.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:21.950 "is_configured": false, 00:22:21.950 "data_offset": 2048, 00:22:21.950 "data_size": 63488 00:22:21.950 } 00:22:21.950 ] 00:22:21.950 }' 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.950 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.518 [2024-12-06 13:16:28.836034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:22.518 [2024-12-06 13:16:28.836122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.518 [2024-12-06 13:16:28.836157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:22.518 [2024-12-06 13:16:28.836173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.518 [2024-12-06 13:16:28.836799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.518 [2024-12-06 13:16:28.836841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:22.518 [2024-12-06 13:16:28.836950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:22.518 [2024-12-06 13:16:28.836982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:22.518 [2024-12-06 13:16:28.837138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:22.518 [2024-12-06 13:16:28.837164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:22.518 [2024-12-06 13:16:28.837508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:22.518 [2024-12-06 13:16:28.842386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:22.518 [2024-12-06 
13:16:28.842424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:22.518 [2024-12-06 13:16:28.842737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.518 pt3 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.518 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.518 13:16:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.519 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.519 "name": "raid_bdev1", 00:22:22.519 "uuid": "08696742-a255-4a5d-b779-0220fa831f05", 00:22:22.519 "strip_size_kb": 64, 00:22:22.519 "state": "online", 00:22:22.519 "raid_level": "raid5f", 00:22:22.519 "superblock": true, 00:22:22.519 "num_base_bdevs": 3, 00:22:22.519 "num_base_bdevs_discovered": 2, 00:22:22.519 "num_base_bdevs_operational": 2, 00:22:22.519 "base_bdevs_list": [ 00:22:22.519 { 00:22:22.519 "name": null, 00:22:22.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.519 "is_configured": false, 00:22:22.519 "data_offset": 2048, 00:22:22.519 "data_size": 63488 00:22:22.519 }, 00:22:22.519 { 00:22:22.519 "name": "pt2", 00:22:22.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:22.519 "is_configured": true, 00:22:22.519 "data_offset": 2048, 00:22:22.519 "data_size": 63488 00:22:22.519 }, 00:22:22.519 { 00:22:22.519 "name": "pt3", 00:22:22.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:22.519 "is_configured": true, 00:22:22.519 "data_offset": 2048, 00:22:22.519 "data_size": 63488 00:22:22.519 } 00:22:22.519 ] 00:22:22.519 }' 00:22:22.519 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.519 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:23.086 [2024-12-06 13:16:29.384704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 08696742-a255-4a5d-b779-0220fa831f05 '!=' 08696742-a255-4a5d-b779-0220fa831f05 ']' 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81834 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81834 ']' 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81834 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81834 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.086 killing process with pid 81834 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81834' 00:22:23.086 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81834 00:22:23.087 [2024-12-06 13:16:29.458437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:23.087 [2024-12-06 13:16:29.458610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.087 [2024-12-06 13:16:29.458727] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.087 [2024-12-06 13:16:29.458772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:23.087 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81834 00:22:23.345 [2024-12-06 13:16:29.734415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:24.280 13:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:24.280 00:22:24.280 real 0m8.632s 00:22:24.280 user 0m14.096s 00:22:24.280 sys 0m1.247s 00:22:24.280 13:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.280 ************************************ 00:22:24.280 END TEST raid5f_superblock_test 00:22:24.280 ************************************ 00:22:24.280 13:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.538 13:16:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:22:24.538 13:16:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:22:24.538 13:16:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:24.538 13:16:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.538 13:16:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.538 ************************************ 00:22:24.538 START TEST 
raid5f_rebuild_test 00:22:24.538 ************************************ 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:24.538 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:24.539 13:16:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82288 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82288 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82288 ']' 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.539 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.539 [2024-12-06 13:16:30.930249] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:24.539 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:24.539 Zero copy mechanism will not be used. 00:22:24.539 [2024-12-06 13:16:30.930424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82288 ] 00:22:24.797 [2024-12-06 13:16:31.105325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.797 [2024-12-06 13:16:31.242250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.056 [2024-12-06 13:16:31.444614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.056 [2024-12-06 13:16:31.444700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.624 BaseBdev1_malloc 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.624 [2024-12-06 13:16:32.057792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:25.624 [2024-12-06 13:16:32.057870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.624 [2024-12-06 13:16:32.057904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:25.624 [2024-12-06 13:16:32.057925] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.624 [2024-12-06 13:16:32.060817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.624 [2024-12-06 13:16:32.060870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:25.624 BaseBdev1 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.624 BaseBdev2_malloc 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.624 13:16:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.624 [2024-12-06 13:16:32.116246] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:25.624 [2024-12-06 13:16:32.116335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.624 [2024-12-06 13:16:32.116366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:25.624 [2024-12-06 13:16:32.116385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.624 [2024-12-06 13:16:32.119288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.624 [2024-12-06 13:16:32.119341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:25.624 BaseBdev2 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.624 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 BaseBdev3_malloc 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 [2024-12-06 13:16:32.184645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:25.883 [2024-12-06 13:16:32.184772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.883 [2024-12-06 13:16:32.184828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:25.883 [2024-12-06 13:16:32.184865] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.883 [2024-12-06 13:16:32.188030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.883 [2024-12-06 13:16:32.188083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:25.883 BaseBdev3 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 spare_malloc 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 spare_delay 00:22:25.883 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.884 [2024-12-06 13:16:32.250142] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:25.884 [2024-12-06 13:16:32.250232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.884 [2024-12-06 13:16:32.250270] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:25.884 [2024-12-06 13:16:32.250289] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.884 [2024-12-06 13:16:32.253296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.884 [2024-12-06 13:16:32.253351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:25.884 spare 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.884 [2024-12-06 13:16:32.262310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.884 [2024-12-06 13:16:32.264828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.884 [2024-12-06 13:16:32.264931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.884 [2024-12-06 13:16:32.265073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:22:25.884 [2024-12-06 13:16:32.265092] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:25.884 [2024-12-06 13:16:32.265474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:25.884 [2024-12-06 13:16:32.270736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:25.884 [2024-12-06 13:16:32.270774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:25.884 [2024-12-06 13:16:32.271031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.884 
13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.884 "name": "raid_bdev1", 00:22:25.884 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:25.884 "strip_size_kb": 64, 00:22:25.884 "state": "online", 00:22:25.884 "raid_level": "raid5f", 00:22:25.884 "superblock": false, 00:22:25.884 "num_base_bdevs": 3, 00:22:25.884 "num_base_bdevs_discovered": 3, 00:22:25.884 "num_base_bdevs_operational": 3, 00:22:25.884 "base_bdevs_list": [ 00:22:25.884 { 00:22:25.884 "name": "BaseBdev1", 00:22:25.884 "uuid": "cc08e5eb-29b2-520c-85b3-d0e0536fc4df", 00:22:25.884 "is_configured": true, 00:22:25.884 "data_offset": 0, 00:22:25.884 "data_size": 65536 00:22:25.884 }, 00:22:25.884 { 00:22:25.884 "name": "BaseBdev2", 00:22:25.884 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:25.884 "is_configured": true, 00:22:25.884 "data_offset": 0, 00:22:25.884 "data_size": 65536 00:22:25.884 }, 00:22:25.884 { 00:22:25.884 "name": "BaseBdev3", 00:22:25.884 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:25.884 "is_configured": true, 00:22:25.884 "data_offset": 0, 00:22:25.884 "data_size": 65536 00:22:25.884 } 00:22:25.884 ] 00:22:25.884 }' 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.884 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 [2024-12-06 13:16:32.765053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:26.451 13:16:32 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:26.451 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:26.709 [2024-12-06 13:16:33.176966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:26.709 /dev/nbd0 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:22:26.709 1+0 records in 00:22:26.709 1+0 records out 00:22:26.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043297 s, 9.5 MB/s 00:22:26.709 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:26.968 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:27.534 512+0 records in 00:22:27.534 512+0 records out 00:22:27.534 67108864 bytes (67 MB, 64 MiB) copied, 0.531358 s, 126 MB/s 00:22:27.534 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:27.534 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:27.534 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:27.534 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:27.534 13:16:33 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:27.534 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:27.534 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:27.534 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:27.534 [2024-12-06 13:16:34.056544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.534 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:27.534 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:27.534 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:27.534 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:27.534 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.810 [2024-12-06 13:16:34.068063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.810 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.811 "name": "raid_bdev1", 00:22:27.811 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:27.811 "strip_size_kb": 64, 00:22:27.811 "state": "online", 00:22:27.811 "raid_level": "raid5f", 00:22:27.811 "superblock": false, 00:22:27.811 "num_base_bdevs": 3, 00:22:27.811 "num_base_bdevs_discovered": 2, 00:22:27.811 "num_base_bdevs_operational": 2, 00:22:27.811 "base_bdevs_list": [ 00:22:27.811 { 00:22:27.811 "name": null, 00:22:27.811 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:27.811 "is_configured": false, 00:22:27.811 "data_offset": 0, 00:22:27.811 "data_size": 65536 00:22:27.811 }, 00:22:27.811 { 00:22:27.811 "name": "BaseBdev2", 00:22:27.811 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:27.811 "is_configured": true, 00:22:27.811 "data_offset": 0, 00:22:27.811 "data_size": 65536 00:22:27.811 }, 00:22:27.811 { 00:22:27.811 "name": "BaseBdev3", 00:22:27.811 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:27.811 "is_configured": true, 00:22:27.811 "data_offset": 0, 00:22:27.811 "data_size": 65536 00:22:27.811 } 00:22:27.811 ] 00:22:27.811 }' 00:22:27.811 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.811 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.378 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:28.378 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.378 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.378 [2024-12-06 13:16:34.620223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:28.378 [2024-12-06 13:16:34.636145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:22:28.378 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.378 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:28.378 [2024-12-06 13:16:34.643713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.321 
13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.321 "name": "raid_bdev1", 00:22:29.321 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:29.321 "strip_size_kb": 64, 00:22:29.321 "state": "online", 00:22:29.321 "raid_level": "raid5f", 00:22:29.321 "superblock": false, 00:22:29.321 "num_base_bdevs": 3, 00:22:29.321 "num_base_bdevs_discovered": 3, 00:22:29.321 "num_base_bdevs_operational": 3, 00:22:29.321 "process": { 00:22:29.321 "type": "rebuild", 00:22:29.321 "target": "spare", 00:22:29.321 "progress": { 00:22:29.321 "blocks": 18432, 00:22:29.321 "percent": 14 00:22:29.321 } 00:22:29.321 }, 00:22:29.321 "base_bdevs_list": [ 00:22:29.321 { 00:22:29.321 "name": "spare", 00:22:29.321 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:29.321 "is_configured": true, 00:22:29.321 "data_offset": 0, 00:22:29.321 "data_size": 65536 00:22:29.321 }, 00:22:29.321 { 00:22:29.321 "name": "BaseBdev2", 00:22:29.321 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:29.321 "is_configured": true, 00:22:29.321 "data_offset": 0, 00:22:29.321 "data_size": 65536 00:22:29.321 }, 00:22:29.321 
{ 00:22:29.321 "name": "BaseBdev3", 00:22:29.321 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:29.321 "is_configured": true, 00:22:29.321 "data_offset": 0, 00:22:29.321 "data_size": 65536 00:22:29.321 } 00:22:29.321 ] 00:22:29.321 }' 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.321 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.321 [2024-12-06 13:16:35.802158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:29.579 [2024-12-06 13:16:35.859834] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:29.579 [2024-12-06 13:16:35.859944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.579 [2024-12-06 13:16:35.859976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:29.579 [2024-12-06 13:16:35.859990] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.579 "name": "raid_bdev1", 00:22:29.579 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:29.579 "strip_size_kb": 64, 00:22:29.579 "state": "online", 00:22:29.579 "raid_level": "raid5f", 00:22:29.579 "superblock": false, 00:22:29.579 "num_base_bdevs": 3, 00:22:29.579 "num_base_bdevs_discovered": 2, 00:22:29.579 "num_base_bdevs_operational": 2, 00:22:29.579 "base_bdevs_list": [ 00:22:29.579 { 00:22:29.579 "name": null, 00:22:29.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.579 
"is_configured": false, 00:22:29.579 "data_offset": 0, 00:22:29.579 "data_size": 65536 00:22:29.579 }, 00:22:29.579 { 00:22:29.579 "name": "BaseBdev2", 00:22:29.579 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:29.579 "is_configured": true, 00:22:29.579 "data_offset": 0, 00:22:29.579 "data_size": 65536 00:22:29.579 }, 00:22:29.579 { 00:22:29.579 "name": "BaseBdev3", 00:22:29.579 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:29.579 "is_configured": true, 00:22:29.579 "data_offset": 0, 00:22:29.579 "data_size": 65536 00:22:29.579 } 00:22:29.579 ] 00:22:29.579 }' 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.579 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.144 "name": 
"raid_bdev1", 00:22:30.144 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:30.144 "strip_size_kb": 64, 00:22:30.144 "state": "online", 00:22:30.144 "raid_level": "raid5f", 00:22:30.144 "superblock": false, 00:22:30.144 "num_base_bdevs": 3, 00:22:30.144 "num_base_bdevs_discovered": 2, 00:22:30.144 "num_base_bdevs_operational": 2, 00:22:30.144 "base_bdevs_list": [ 00:22:30.144 { 00:22:30.144 "name": null, 00:22:30.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.144 "is_configured": false, 00:22:30.144 "data_offset": 0, 00:22:30.144 "data_size": 65536 00:22:30.144 }, 00:22:30.144 { 00:22:30.144 "name": "BaseBdev2", 00:22:30.144 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:30.144 "is_configured": true, 00:22:30.144 "data_offset": 0, 00:22:30.144 "data_size": 65536 00:22:30.144 }, 00:22:30.144 { 00:22:30.144 "name": "BaseBdev3", 00:22:30.144 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:30.144 "is_configured": true, 00:22:30.144 "data_offset": 0, 00:22:30.144 "data_size": 65536 00:22:30.144 } 00:22:30.144 ] 00:22:30.144 }' 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.144 [2024-12-06 13:16:36.568473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:30.144 [2024-12-06 
13:16:36.583644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.144 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:30.144 [2024-12-06 13:16:36.591224] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.079 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.337 "name": "raid_bdev1", 00:22:31.337 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:31.337 "strip_size_kb": 64, 00:22:31.337 "state": "online", 00:22:31.337 "raid_level": "raid5f", 00:22:31.337 "superblock": false, 00:22:31.337 "num_base_bdevs": 3, 00:22:31.337 "num_base_bdevs_discovered": 3, 00:22:31.337 "num_base_bdevs_operational": 3, 
00:22:31.337 "process": { 00:22:31.337 "type": "rebuild", 00:22:31.337 "target": "spare", 00:22:31.337 "progress": { 00:22:31.337 "blocks": 18432, 00:22:31.337 "percent": 14 00:22:31.337 } 00:22:31.337 }, 00:22:31.337 "base_bdevs_list": [ 00:22:31.337 { 00:22:31.337 "name": "spare", 00:22:31.337 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:31.337 "is_configured": true, 00:22:31.337 "data_offset": 0, 00:22:31.337 "data_size": 65536 00:22:31.337 }, 00:22:31.337 { 00:22:31.337 "name": "BaseBdev2", 00:22:31.337 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:31.337 "is_configured": true, 00:22:31.337 "data_offset": 0, 00:22:31.337 "data_size": 65536 00:22:31.337 }, 00:22:31.337 { 00:22:31.337 "name": "BaseBdev3", 00:22:31.337 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:31.337 "is_configured": true, 00:22:31.337 "data_offset": 0, 00:22:31.337 "data_size": 65536 00:22:31.337 } 00:22:31.337 ] 00:22:31.337 }' 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=609 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.337 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.337 "name": "raid_bdev1", 00:22:31.337 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:31.337 "strip_size_kb": 64, 00:22:31.337 "state": "online", 00:22:31.338 "raid_level": "raid5f", 00:22:31.338 "superblock": false, 00:22:31.338 "num_base_bdevs": 3, 00:22:31.338 "num_base_bdevs_discovered": 3, 00:22:31.338 "num_base_bdevs_operational": 3, 00:22:31.338 "process": { 00:22:31.338 "type": "rebuild", 00:22:31.338 "target": "spare", 00:22:31.338 "progress": { 00:22:31.338 "blocks": 22528, 00:22:31.338 "percent": 17 00:22:31.338 } 00:22:31.338 }, 00:22:31.338 "base_bdevs_list": [ 00:22:31.338 { 00:22:31.338 "name": "spare", 00:22:31.338 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:31.338 "is_configured": true, 00:22:31.338 "data_offset": 0, 00:22:31.338 "data_size": 65536 00:22:31.338 }, 00:22:31.338 { 00:22:31.338 "name": "BaseBdev2", 
00:22:31.338 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:31.338 "is_configured": true, 00:22:31.338 "data_offset": 0, 00:22:31.338 "data_size": 65536 00:22:31.338 }, 00:22:31.338 { 00:22:31.338 "name": "BaseBdev3", 00:22:31.338 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:31.338 "is_configured": true, 00:22:31.338 "data_offset": 0, 00:22:31.338 "data_size": 65536 00:22:31.338 } 00:22:31.338 ] 00:22:31.338 }' 00:22:31.338 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.338 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.338 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.596 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.596 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.530 
13:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:32.530 "name": "raid_bdev1", 00:22:32.530 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:32.530 "strip_size_kb": 64, 00:22:32.530 "state": "online", 00:22:32.530 "raid_level": "raid5f", 00:22:32.530 "superblock": false, 00:22:32.530 "num_base_bdevs": 3, 00:22:32.530 "num_base_bdevs_discovered": 3, 00:22:32.530 "num_base_bdevs_operational": 3, 00:22:32.530 "process": { 00:22:32.530 "type": "rebuild", 00:22:32.530 "target": "spare", 00:22:32.530 "progress": { 00:22:32.530 "blocks": 45056, 00:22:32.530 "percent": 34 00:22:32.530 } 00:22:32.530 }, 00:22:32.530 "base_bdevs_list": [ 00:22:32.530 { 00:22:32.530 "name": "spare", 00:22:32.530 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:32.530 "is_configured": true, 00:22:32.530 "data_offset": 0, 00:22:32.530 "data_size": 65536 00:22:32.530 }, 00:22:32.530 { 00:22:32.530 "name": "BaseBdev2", 00:22:32.530 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:32.530 "is_configured": true, 00:22:32.530 "data_offset": 0, 00:22:32.530 "data_size": 65536 00:22:32.530 }, 00:22:32.530 { 00:22:32.530 "name": "BaseBdev3", 00:22:32.530 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:32.530 "is_configured": true, 00:22:32.530 "data_offset": 0, 00:22:32.530 "data_size": 65536 00:22:32.530 } 00:22:32.530 ] 00:22:32.530 }' 00:22:32.530 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:32.530 13:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.530 13:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:32.787 13:16:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.787 13:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:33.827 "name": "raid_bdev1", 00:22:33.827 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:33.827 "strip_size_kb": 64, 00:22:33.827 "state": "online", 00:22:33.827 "raid_level": "raid5f", 00:22:33.827 "superblock": false, 00:22:33.827 "num_base_bdevs": 3, 00:22:33.827 "num_base_bdevs_discovered": 3, 00:22:33.827 "num_base_bdevs_operational": 3, 00:22:33.827 "process": { 00:22:33.827 "type": "rebuild", 00:22:33.827 "target": "spare", 00:22:33.827 "progress": { 00:22:33.827 "blocks": 69632, 00:22:33.827 "percent": 53 00:22:33.827 } 
00:22:33.827 }, 00:22:33.827 "base_bdevs_list": [ 00:22:33.827 { 00:22:33.827 "name": "spare", 00:22:33.827 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:33.827 "is_configured": true, 00:22:33.827 "data_offset": 0, 00:22:33.827 "data_size": 65536 00:22:33.827 }, 00:22:33.827 { 00:22:33.827 "name": "BaseBdev2", 00:22:33.827 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:33.827 "is_configured": true, 00:22:33.827 "data_offset": 0, 00:22:33.827 "data_size": 65536 00:22:33.827 }, 00:22:33.827 { 00:22:33.827 "name": "BaseBdev3", 00:22:33.827 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:33.827 "is_configured": true, 00:22:33.827 "data_offset": 0, 00:22:33.827 "data_size": 65536 00:22:33.827 } 00:22:33.827 ] 00:22:33.827 }' 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.827 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.760 13:16:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.760 13:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.019 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.019 "name": "raid_bdev1", 00:22:35.019 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:35.019 "strip_size_kb": 64, 00:22:35.019 "state": "online", 00:22:35.019 "raid_level": "raid5f", 00:22:35.019 "superblock": false, 00:22:35.019 "num_base_bdevs": 3, 00:22:35.019 "num_base_bdevs_discovered": 3, 00:22:35.019 "num_base_bdevs_operational": 3, 00:22:35.019 "process": { 00:22:35.019 "type": "rebuild", 00:22:35.019 "target": "spare", 00:22:35.019 "progress": { 00:22:35.019 "blocks": 94208, 00:22:35.019 "percent": 71 00:22:35.019 } 00:22:35.019 }, 00:22:35.019 "base_bdevs_list": [ 00:22:35.019 { 00:22:35.019 "name": "spare", 00:22:35.019 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:35.019 "is_configured": true, 00:22:35.019 "data_offset": 0, 00:22:35.019 "data_size": 65536 00:22:35.019 }, 00:22:35.019 { 00:22:35.019 "name": "BaseBdev2", 00:22:35.019 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:35.019 "is_configured": true, 00:22:35.019 "data_offset": 0, 00:22:35.019 "data_size": 65536 00:22:35.019 }, 00:22:35.019 { 00:22:35.019 "name": "BaseBdev3", 00:22:35.019 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:35.019 "is_configured": true, 00:22:35.019 "data_offset": 0, 00:22:35.019 "data_size": 65536 00:22:35.019 } 00:22:35.019 ] 00:22:35.019 }' 00:22:35.019 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:22:35.019 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:35.019 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.019 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.019 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.953 "name": "raid_bdev1", 00:22:35.953 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:35.953 "strip_size_kb": 64, 00:22:35.953 "state": "online", 00:22:35.953 "raid_level": "raid5f", 00:22:35.953 "superblock": 
false, 00:22:35.953 "num_base_bdevs": 3, 00:22:35.953 "num_base_bdevs_discovered": 3, 00:22:35.953 "num_base_bdevs_operational": 3, 00:22:35.953 "process": { 00:22:35.953 "type": "rebuild", 00:22:35.953 "target": "spare", 00:22:35.953 "progress": { 00:22:35.953 "blocks": 116736, 00:22:35.953 "percent": 89 00:22:35.953 } 00:22:35.953 }, 00:22:35.953 "base_bdevs_list": [ 00:22:35.953 { 00:22:35.953 "name": "spare", 00:22:35.953 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:35.953 "is_configured": true, 00:22:35.953 "data_offset": 0, 00:22:35.953 "data_size": 65536 00:22:35.953 }, 00:22:35.953 { 00:22:35.953 "name": "BaseBdev2", 00:22:35.953 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:35.953 "is_configured": true, 00:22:35.953 "data_offset": 0, 00:22:35.953 "data_size": 65536 00:22:35.953 }, 00:22:35.953 { 00:22:35.953 "name": "BaseBdev3", 00:22:35.953 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:35.953 "is_configured": true, 00:22:35.953 "data_offset": 0, 00:22:35.953 "data_size": 65536 00:22:35.953 } 00:22:35.953 ] 00:22:35.953 }' 00:22:35.953 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:36.213 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:36.213 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:36.213 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:36.213 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:36.781 [2024-12-06 13:16:43.073802] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:36.781 [2024-12-06 13:16:43.073939] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:36.781 [2024-12-06 13:16:43.074003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.370 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:37.370 "name": "raid_bdev1", 00:22:37.370 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:37.370 "strip_size_kb": 64, 00:22:37.370 "state": "online", 00:22:37.370 "raid_level": "raid5f", 00:22:37.370 "superblock": false, 00:22:37.370 "num_base_bdevs": 3, 00:22:37.370 "num_base_bdevs_discovered": 3, 00:22:37.370 "num_base_bdevs_operational": 3, 00:22:37.370 "base_bdevs_list": [ 00:22:37.370 { 00:22:37.370 "name": "spare", 00:22:37.370 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:37.370 "is_configured": true, 00:22:37.370 "data_offset": 0, 00:22:37.370 "data_size": 65536 00:22:37.370 }, 00:22:37.370 { 00:22:37.370 "name": "BaseBdev2", 00:22:37.370 "uuid": 
"3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:37.370 "is_configured": true, 00:22:37.370 "data_offset": 0, 00:22:37.370 "data_size": 65536 00:22:37.370 }, 00:22:37.370 { 00:22:37.371 "name": "BaseBdev3", 00:22:37.371 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:37.371 "is_configured": true, 00:22:37.371 "data_offset": 0, 00:22:37.371 "data_size": 65536 00:22:37.371 } 00:22:37.371 ] 00:22:37.371 }' 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:37.371 "name": "raid_bdev1", 00:22:37.371 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:37.371 "strip_size_kb": 64, 00:22:37.371 "state": "online", 00:22:37.371 "raid_level": "raid5f", 00:22:37.371 "superblock": false, 00:22:37.371 "num_base_bdevs": 3, 00:22:37.371 "num_base_bdevs_discovered": 3, 00:22:37.371 "num_base_bdevs_operational": 3, 00:22:37.371 "base_bdevs_list": [ 00:22:37.371 { 00:22:37.371 "name": "spare", 00:22:37.371 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:37.371 "is_configured": true, 00:22:37.371 "data_offset": 0, 00:22:37.371 "data_size": 65536 00:22:37.371 }, 00:22:37.371 { 00:22:37.371 "name": "BaseBdev2", 00:22:37.371 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:37.371 "is_configured": true, 00:22:37.371 "data_offset": 0, 00:22:37.371 "data_size": 65536 00:22:37.371 }, 00:22:37.371 { 00:22:37.371 "name": "BaseBdev3", 00:22:37.371 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:37.371 "is_configured": true, 00:22:37.371 "data_offset": 0, 00:22:37.371 "data_size": 65536 00:22:37.371 } 00:22:37.371 ] 00:22:37.371 }' 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:37.371 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.646 "name": "raid_bdev1", 00:22:37.646 "uuid": "bda4c637-8f62-421c-b00a-da5c228da849", 00:22:37.646 "strip_size_kb": 64, 00:22:37.646 "state": "online", 00:22:37.646 "raid_level": "raid5f", 00:22:37.646 "superblock": false, 00:22:37.646 "num_base_bdevs": 3, 00:22:37.646 "num_base_bdevs_discovered": 3, 00:22:37.646 "num_base_bdevs_operational": 3, 00:22:37.646 "base_bdevs_list": [ 00:22:37.646 { 00:22:37.646 "name": "spare", 00:22:37.646 "uuid": "2557e3d0-0cd2-58a8-ac30-57d5e2e91c44", 00:22:37.646 "is_configured": true, 00:22:37.646 "data_offset": 
0, 00:22:37.646 "data_size": 65536 00:22:37.646 }, 00:22:37.646 { 00:22:37.646 "name": "BaseBdev2", 00:22:37.646 "uuid": "3605c65d-146a-564d-9a42-2332993e8e4b", 00:22:37.646 "is_configured": true, 00:22:37.646 "data_offset": 0, 00:22:37.646 "data_size": 65536 00:22:37.646 }, 00:22:37.646 { 00:22:37.646 "name": "BaseBdev3", 00:22:37.646 "uuid": "58276099-1b7a-59f1-b19a-a8424732064e", 00:22:37.646 "is_configured": true, 00:22:37.646 "data_offset": 0, 00:22:37.646 "data_size": 65536 00:22:37.646 } 00:22:37.646 ] 00:22:37.646 }' 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.646 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.906 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:37.906 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.906 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.906 [2024-12-06 13:16:44.425214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.906 [2024-12-06 13:16:44.425264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.906 [2024-12-06 13:16:44.425375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.906 [2024-12-06 13:16:44.425506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.906 [2024-12-06 13:16:44.425534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:37.906 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:38.166 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:38.426 /dev/nbd0 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:38.426 1+0 records in 00:22:38.426 1+0 records out 00:22:38.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550209 s, 7.4 MB/s 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:38.426 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:38.426 
13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:38.686 /dev/nbd1 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:38.686 1+0 records in 00:22:38.686 1+0 records out 00:22:38.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495191 s, 8.3 MB/s 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:38.686 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:38.944 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:38.944 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:38.944 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:38.944 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:38.944 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:38.944 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:38.944 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:39.201 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:39.201 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:39.201 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:39.201 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:39.201 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:39.201 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:39.201 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:22:39.201 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:39.202 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:39.202 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:39.778 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82288 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82288 ']' 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82288 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82288 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:22:39.779 killing process with pid 82288 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82288' 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82288 00:22:39.779 Received shutdown signal, test time was about 60.000000 seconds 00:22:39.779 00:22:39.779 Latency(us) 00:22:39.779 [2024-12-06T13:16:46.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.779 [2024-12-06T13:16:46.308Z] =================================================================================================================== 00:22:39.779 [2024-12-06T13:16:46.308Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.779 [2024-12-06 13:16:46.068330] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:39.779 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82288 00:22:40.038 [2024-12-06 13:16:46.429685] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:22:41.510 00:22:41.510 real 0m16.674s 00:22:41.510 user 0m21.366s 00:22:41.510 sys 0m2.125s 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.510 ************************************ 00:22:41.510 END TEST raid5f_rebuild_test 00:22:41.510 ************************************ 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.510 13:16:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:22:41.510 13:16:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:41.510 13:16:47 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.510 13:16:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:41.510 ************************************ 00:22:41.510 START TEST raid5f_rebuild_test_sb 00:22:41.510 ************************************ 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:41.510 13:16:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82734 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82734 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:41.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82734 ']' 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.510 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.511 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.511 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.511 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.511 [2024-12-06 13:16:47.697614] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:22:41.511 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:41.511 Zero copy mechanism will not be used. 00:22:41.511 [2024-12-06 13:16:47.697847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82734 ] 00:22:41.511 [2024-12-06 13:16:47.918380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.769 [2024-12-06 13:16:48.098141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.028 [2024-12-06 13:16:48.321391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.028 [2024-12-06 13:16:48.321503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:42.321 13:16:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.321 BaseBdev1_malloc 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.321 [2024-12-06 13:16:48.780104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:42.321 [2024-12-06 13:16:48.780197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.321 [2024-12-06 13:16:48.780232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:42.321 [2024-12-06 13:16:48.780250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.321 [2024-12-06 13:16:48.783107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.321 [2024-12-06 13:16:48.783168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:42.321 BaseBdev1 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.321 BaseBdev2_malloc 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.321 [2024-12-06 13:16:48.828583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:42.321 [2024-12-06 13:16:48.828699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.321 [2024-12-06 13:16:48.828734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:42.321 [2024-12-06 13:16:48.828753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.321 [2024-12-06 13:16:48.831835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.321 [2024-12-06 13:16:48.831886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:42.321 BaseBdev2 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:42.321 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.321 
13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.585 BaseBdev3_malloc 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.585 [2024-12-06 13:16:48.897594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:42.585 [2024-12-06 13:16:48.897675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.585 [2024-12-06 13:16:48.897716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:42.585 [2024-12-06 13:16:48.897735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.585 [2024-12-06 13:16:48.900624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.585 [2024-12-06 13:16:48.900677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:42.585 BaseBdev3 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.585 spare_malloc 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.585 spare_delay 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.585 [2024-12-06 13:16:48.962931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:42.585 [2024-12-06 13:16:48.963005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.585 [2024-12-06 13:16:48.963048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:42.585 [2024-12-06 13:16:48.963066] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.585 [2024-12-06 13:16:48.965912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.585 [2024-12-06 13:16:48.965966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:42.585 spare 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.585 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.585 [2024-12-06 13:16:48.971049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:42.585 [2024-12-06 13:16:48.973708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:42.585 [2024-12-06 13:16:48.973811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:42.585 [2024-12-06 13:16:48.974092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:42.585 [2024-12-06 13:16:48.974123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:42.585 [2024-12-06 13:16:48.974522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:42.585 [2024-12-06 13:16:48.979784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:42.585 [2024-12-06 13:16:48.979826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:42.586 [2024-12-06 13:16:48.980109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.586 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.586 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.586 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.586 "name": "raid_bdev1", 00:22:42.586 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:42.586 "strip_size_kb": 64, 00:22:42.586 "state": "online", 00:22:42.586 "raid_level": "raid5f", 00:22:42.586 "superblock": true, 00:22:42.586 "num_base_bdevs": 3, 00:22:42.586 "num_base_bdevs_discovered": 3, 00:22:42.586 "num_base_bdevs_operational": 3, 00:22:42.586 "base_bdevs_list": [ 00:22:42.586 { 00:22:42.586 "name": "BaseBdev1", 00:22:42.586 "uuid": "934492d2-4bda-5278-914b-38af1c896d9d", 00:22:42.586 "is_configured": true, 00:22:42.586 "data_offset": 2048, 00:22:42.586 "data_size": 63488 00:22:42.586 }, 00:22:42.586 { 00:22:42.586 "name": "BaseBdev2", 00:22:42.586 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:42.586 "is_configured": true, 00:22:42.586 "data_offset": 2048, 00:22:42.586 "data_size": 63488 00:22:42.586 }, 00:22:42.586 { 00:22:42.586 "name": 
"BaseBdev3", 00:22:42.586 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:42.586 "is_configured": true, 00:22:42.586 "data_offset": 2048, 00:22:42.586 "data_size": 63488 00:22:42.586 } 00:22:42.586 ] 00:22:42.586 }' 00:22:42.586 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.586 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.153 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:43.153 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:43.153 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.153 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.154 [2024-12-06 13:16:49.482590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:43.154 13:16:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.154 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:43.412 [2024-12-06 13:16:49.886503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:43.412 /dev/nbd0 00:22:43.412 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:43.412 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:43.412 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:43.412 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:43.412 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:22:43.412 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:43.412 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:43.671 1+0 records in 00:22:43.671 1+0 records out 00:22:43.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038437 s, 10.7 MB/s 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:22:43.671 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:43.930 496+0 records in 00:22:43.930 496+0 records out 00:22:43.930 65011712 bytes (65 MB, 62 MiB) copied, 0.481955 s, 135 MB/s 00:22:43.930 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:43.930 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:43.930 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:43.930 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:43.930 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:43.930 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:43.930 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:44.496 [2024-12-06 13:16:50.754655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:44.496 13:16:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.496 [2024-12-06 13:16:50.788543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.496 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.496 "name": "raid_bdev1", 00:22:44.496 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:44.496 "strip_size_kb": 64, 00:22:44.496 "state": "online", 00:22:44.496 "raid_level": "raid5f", 00:22:44.496 "superblock": true, 00:22:44.496 "num_base_bdevs": 3, 00:22:44.496 "num_base_bdevs_discovered": 2, 00:22:44.496 "num_base_bdevs_operational": 2, 00:22:44.496 "base_bdevs_list": [ 00:22:44.497 { 00:22:44.497 "name": null, 00:22:44.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.497 "is_configured": false, 00:22:44.497 "data_offset": 0, 00:22:44.497 "data_size": 63488 00:22:44.497 }, 00:22:44.497 { 00:22:44.497 "name": "BaseBdev2", 00:22:44.497 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:44.497 "is_configured": true, 00:22:44.497 "data_offset": 2048, 00:22:44.497 "data_size": 63488 00:22:44.497 }, 00:22:44.497 { 00:22:44.497 "name": "BaseBdev3", 00:22:44.497 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:44.497 "is_configured": true, 00:22:44.497 "data_offset": 2048, 00:22:44.497 "data_size": 63488 00:22:44.497 } 00:22:44.497 ] 00:22:44.497 }' 00:22:44.497 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.497 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.756 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:44.756 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.756 13:16:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.756 [2024-12-06 13:16:51.276718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.015 [2024-12-06 13:16:51.300127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:22:45.015 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.015 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:45.015 [2024-12-06 13:16:51.311681] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:45.952 "name": "raid_bdev1", 00:22:45.952 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 
00:22:45.952 "strip_size_kb": 64, 00:22:45.952 "state": "online", 00:22:45.952 "raid_level": "raid5f", 00:22:45.952 "superblock": true, 00:22:45.952 "num_base_bdevs": 3, 00:22:45.952 "num_base_bdevs_discovered": 3, 00:22:45.952 "num_base_bdevs_operational": 3, 00:22:45.952 "process": { 00:22:45.952 "type": "rebuild", 00:22:45.952 "target": "spare", 00:22:45.952 "progress": { 00:22:45.952 "blocks": 18432, 00:22:45.952 "percent": 14 00:22:45.952 } 00:22:45.952 }, 00:22:45.952 "base_bdevs_list": [ 00:22:45.952 { 00:22:45.952 "name": "spare", 00:22:45.952 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:45.952 "is_configured": true, 00:22:45.952 "data_offset": 2048, 00:22:45.952 "data_size": 63488 00:22:45.952 }, 00:22:45.952 { 00:22:45.952 "name": "BaseBdev2", 00:22:45.952 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:45.952 "is_configured": true, 00:22:45.952 "data_offset": 2048, 00:22:45.952 "data_size": 63488 00:22:45.952 }, 00:22:45.952 { 00:22:45.952 "name": "BaseBdev3", 00:22:45.952 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:45.952 "is_configured": true, 00:22:45.952 "data_offset": 2048, 00:22:45.952 "data_size": 63488 00:22:45.952 } 00:22:45.952 ] 00:22:45.952 }' 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.952 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:22:45.952 [2024-12-06 13:16:52.470801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:46.211 [2024-12-06 13:16:52.531211] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:46.211 [2024-12-06 13:16:52.531339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.211 [2024-12-06 13:16:52.531375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:46.211 [2024-12-06 13:16:52.531389] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.211 "name": "raid_bdev1", 00:22:46.211 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:46.211 "strip_size_kb": 64, 00:22:46.211 "state": "online", 00:22:46.211 "raid_level": "raid5f", 00:22:46.211 "superblock": true, 00:22:46.211 "num_base_bdevs": 3, 00:22:46.211 "num_base_bdevs_discovered": 2, 00:22:46.211 "num_base_bdevs_operational": 2, 00:22:46.211 "base_bdevs_list": [ 00:22:46.211 { 00:22:46.211 "name": null, 00:22:46.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.211 "is_configured": false, 00:22:46.211 "data_offset": 0, 00:22:46.211 "data_size": 63488 00:22:46.211 }, 00:22:46.211 { 00:22:46.211 "name": "BaseBdev2", 00:22:46.211 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:46.211 "is_configured": true, 00:22:46.211 "data_offset": 2048, 00:22:46.211 "data_size": 63488 00:22:46.211 }, 00:22:46.211 { 00:22:46.211 "name": "BaseBdev3", 00:22:46.211 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:46.211 "is_configured": true, 00:22:46.211 "data_offset": 2048, 00:22:46.211 "data_size": 63488 00:22:46.211 } 00:22:46.211 ] 00:22:46.211 }' 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.211 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:46.779 13:16:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:46.779 "name": "raid_bdev1", 00:22:46.779 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:46.779 "strip_size_kb": 64, 00:22:46.779 "state": "online", 00:22:46.779 "raid_level": "raid5f", 00:22:46.779 "superblock": true, 00:22:46.779 "num_base_bdevs": 3, 00:22:46.779 "num_base_bdevs_discovered": 2, 00:22:46.779 "num_base_bdevs_operational": 2, 00:22:46.779 "base_bdevs_list": [ 00:22:46.779 { 00:22:46.779 "name": null, 00:22:46.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.779 "is_configured": false, 00:22:46.779 "data_offset": 0, 00:22:46.779 "data_size": 63488 00:22:46.779 }, 00:22:46.779 { 00:22:46.779 "name": "BaseBdev2", 00:22:46.779 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:46.779 "is_configured": true, 00:22:46.779 "data_offset": 2048, 00:22:46.779 "data_size": 63488 00:22:46.779 }, 00:22:46.779 { 00:22:46.779 "name": "BaseBdev3", 00:22:46.779 "uuid": 
"5562651f-04b9-5328-933d-4165a2e2e931", 00:22:46.779 "is_configured": true, 00:22:46.779 "data_offset": 2048, 00:22:46.779 "data_size": 63488 00:22:46.779 } 00:22:46.779 ] 00:22:46.779 }' 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:46.779 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:46.780 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.780 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.780 [2024-12-06 13:16:53.220906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:46.780 [2024-12-06 13:16:53.236589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:22:46.780 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.780 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:46.780 [2024-12-06 13:16:53.244232] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.156 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.156 "name": "raid_bdev1", 00:22:48.156 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:48.156 "strip_size_kb": 64, 00:22:48.156 "state": "online", 00:22:48.156 "raid_level": "raid5f", 00:22:48.156 "superblock": true, 00:22:48.156 "num_base_bdevs": 3, 00:22:48.156 "num_base_bdevs_discovered": 3, 00:22:48.156 "num_base_bdevs_operational": 3, 00:22:48.156 "process": { 00:22:48.156 "type": "rebuild", 00:22:48.156 "target": "spare", 00:22:48.156 "progress": { 00:22:48.156 "blocks": 18432, 00:22:48.156 "percent": 14 00:22:48.156 } 00:22:48.156 }, 00:22:48.156 "base_bdevs_list": [ 00:22:48.156 { 00:22:48.156 "name": "spare", 00:22:48.156 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:48.156 "is_configured": true, 00:22:48.156 "data_offset": 2048, 00:22:48.156 "data_size": 63488 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "name": "BaseBdev2", 00:22:48.156 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:48.156 "is_configured": true, 00:22:48.156 "data_offset": 2048, 00:22:48.156 "data_size": 63488 00:22:48.156 }, 00:22:48.156 { 00:22:48.156 "name": "BaseBdev3", 00:22:48.156 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:48.157 
"is_configured": true, 00:22:48.157 "data_offset": 2048, 00:22:48.157 "data_size": 63488 00:22:48.157 } 00:22:48.157 ] 00:22:48.157 }' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:48.157 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=626 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:48.157 "name": "raid_bdev1", 00:22:48.157 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:48.157 "strip_size_kb": 64, 00:22:48.157 "state": "online", 00:22:48.157 "raid_level": "raid5f", 00:22:48.157 "superblock": true, 00:22:48.157 "num_base_bdevs": 3, 00:22:48.157 "num_base_bdevs_discovered": 3, 00:22:48.157 "num_base_bdevs_operational": 3, 00:22:48.157 "process": { 00:22:48.157 "type": "rebuild", 00:22:48.157 "target": "spare", 00:22:48.157 "progress": { 00:22:48.157 "blocks": 22528, 00:22:48.157 "percent": 17 00:22:48.157 } 00:22:48.157 }, 00:22:48.157 "base_bdevs_list": [ 00:22:48.157 { 00:22:48.157 "name": "spare", 00:22:48.157 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:48.157 "is_configured": true, 00:22:48.157 "data_offset": 2048, 00:22:48.157 "data_size": 63488 00:22:48.157 }, 00:22:48.157 { 00:22:48.157 "name": "BaseBdev2", 00:22:48.157 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:48.157 "is_configured": true, 00:22:48.157 "data_offset": 2048, 00:22:48.157 "data_size": 63488 00:22:48.157 }, 00:22:48.157 { 00:22:48.157 "name": "BaseBdev3", 00:22:48.157 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:48.157 "is_configured": true, 00:22:48.157 "data_offset": 2048, 00:22:48.157 "data_size": 63488 00:22:48.157 } 00:22:48.157 ] 00:22:48.157 }' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.157 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.116 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.374 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:49.374 "name": "raid_bdev1", 00:22:49.374 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:49.374 "strip_size_kb": 64, 00:22:49.374 "state": "online", 00:22:49.374 
"raid_level": "raid5f", 00:22:49.374 "superblock": true, 00:22:49.374 "num_base_bdevs": 3, 00:22:49.374 "num_base_bdevs_discovered": 3, 00:22:49.374 "num_base_bdevs_operational": 3, 00:22:49.374 "process": { 00:22:49.374 "type": "rebuild", 00:22:49.374 "target": "spare", 00:22:49.374 "progress": { 00:22:49.374 "blocks": 47104, 00:22:49.374 "percent": 37 00:22:49.374 } 00:22:49.374 }, 00:22:49.374 "base_bdevs_list": [ 00:22:49.374 { 00:22:49.374 "name": "spare", 00:22:49.374 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:49.374 "is_configured": true, 00:22:49.374 "data_offset": 2048, 00:22:49.374 "data_size": 63488 00:22:49.374 }, 00:22:49.374 { 00:22:49.374 "name": "BaseBdev2", 00:22:49.374 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:49.374 "is_configured": true, 00:22:49.374 "data_offset": 2048, 00:22:49.374 "data_size": 63488 00:22:49.374 }, 00:22:49.374 { 00:22:49.374 "name": "BaseBdev3", 00:22:49.374 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:49.374 "is_configured": true, 00:22:49.374 "data_offset": 2048, 00:22:49.374 "data_size": 63488 00:22:49.374 } 00:22:49.374 ] 00:22:49.374 }' 00:22:49.374 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:49.374 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.374 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:49.374 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.374 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.306 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.564 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:50.564 "name": "raid_bdev1", 00:22:50.564 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:50.564 "strip_size_kb": 64, 00:22:50.564 "state": "online", 00:22:50.564 "raid_level": "raid5f", 00:22:50.564 "superblock": true, 00:22:50.564 "num_base_bdevs": 3, 00:22:50.564 "num_base_bdevs_discovered": 3, 00:22:50.564 "num_base_bdevs_operational": 3, 00:22:50.564 "process": { 00:22:50.564 "type": "rebuild", 00:22:50.564 "target": "spare", 00:22:50.564 "progress": { 00:22:50.564 "blocks": 69632, 00:22:50.564 "percent": 54 00:22:50.564 } 00:22:50.564 }, 00:22:50.564 "base_bdevs_list": [ 00:22:50.564 { 00:22:50.564 "name": "spare", 00:22:50.564 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:50.564 "is_configured": true, 00:22:50.564 "data_offset": 2048, 00:22:50.564 "data_size": 63488 00:22:50.564 }, 00:22:50.564 { 00:22:50.564 "name": "BaseBdev2", 00:22:50.564 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:50.564 
"is_configured": true, 00:22:50.564 "data_offset": 2048, 00:22:50.564 "data_size": 63488 00:22:50.564 }, 00:22:50.564 { 00:22:50.564 "name": "BaseBdev3", 00:22:50.564 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:50.564 "is_configured": true, 00:22:50.564 "data_offset": 2048, 00:22:50.564 "data_size": 63488 00:22:50.564 } 00:22:50.564 ] 00:22:50.564 }' 00:22:50.564 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:50.564 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:50.564 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:50.564 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:50.564 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.523 13:16:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:51.523 "name": "raid_bdev1", 00:22:51.523 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:51.523 "strip_size_kb": 64, 00:22:51.523 "state": "online", 00:22:51.523 "raid_level": "raid5f", 00:22:51.523 "superblock": true, 00:22:51.523 "num_base_bdevs": 3, 00:22:51.523 "num_base_bdevs_discovered": 3, 00:22:51.523 "num_base_bdevs_operational": 3, 00:22:51.523 "process": { 00:22:51.523 "type": "rebuild", 00:22:51.523 "target": "spare", 00:22:51.523 "progress": { 00:22:51.523 "blocks": 94208, 00:22:51.523 "percent": 74 00:22:51.523 } 00:22:51.523 }, 00:22:51.523 "base_bdevs_list": [ 00:22:51.523 { 00:22:51.523 "name": "spare", 00:22:51.523 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:51.523 "is_configured": true, 00:22:51.523 "data_offset": 2048, 00:22:51.523 "data_size": 63488 00:22:51.523 }, 00:22:51.523 { 00:22:51.523 "name": "BaseBdev2", 00:22:51.523 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:51.523 "is_configured": true, 00:22:51.523 "data_offset": 2048, 00:22:51.523 "data_size": 63488 00:22:51.523 }, 00:22:51.523 { 00:22:51.523 "name": "BaseBdev3", 00:22:51.523 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:51.523 "is_configured": true, 00:22:51.523 "data_offset": 2048, 00:22:51.523 "data_size": 63488 00:22:51.523 } 00:22:51.523 ] 00:22:51.523 }' 00:22:51.523 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:51.781 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.781 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:51.781 13:16:58 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.781 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:52.714 "name": "raid_bdev1", 00:22:52.714 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:52.714 "strip_size_kb": 64, 00:22:52.714 "state": "online", 00:22:52.714 "raid_level": "raid5f", 00:22:52.714 "superblock": true, 00:22:52.714 "num_base_bdevs": 3, 00:22:52.714 "num_base_bdevs_discovered": 3, 00:22:52.714 "num_base_bdevs_operational": 3, 00:22:52.714 "process": { 00:22:52.714 "type": "rebuild", 00:22:52.714 "target": "spare", 00:22:52.714 "progress": { 00:22:52.714 "blocks": 116736, 
00:22:52.714 "percent": 91 00:22:52.714 } 00:22:52.714 }, 00:22:52.714 "base_bdevs_list": [ 00:22:52.714 { 00:22:52.714 "name": "spare", 00:22:52.714 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:52.714 "is_configured": true, 00:22:52.714 "data_offset": 2048, 00:22:52.714 "data_size": 63488 00:22:52.714 }, 00:22:52.714 { 00:22:52.714 "name": "BaseBdev2", 00:22:52.714 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:52.714 "is_configured": true, 00:22:52.714 "data_offset": 2048, 00:22:52.714 "data_size": 63488 00:22:52.714 }, 00:22:52.714 { 00:22:52.714 "name": "BaseBdev3", 00:22:52.714 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:52.714 "is_configured": true, 00:22:52.714 "data_offset": 2048, 00:22:52.714 "data_size": 63488 00:22:52.714 } 00:22:52.714 ] 00:22:52.714 }' 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.714 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:52.973 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.973 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:53.231 [2024-12-06 13:16:59.544165] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:53.231 [2024-12-06 13:16:59.544292] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:53.231 [2024-12-06 13:16:59.544525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.798 
13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.798 "name": "raid_bdev1", 00:22:53.798 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:53.798 "strip_size_kb": 64, 00:22:53.798 "state": "online", 00:22:53.798 "raid_level": "raid5f", 00:22:53.798 "superblock": true, 00:22:53.798 "num_base_bdevs": 3, 00:22:53.798 "num_base_bdevs_discovered": 3, 00:22:53.798 "num_base_bdevs_operational": 3, 00:22:53.798 "base_bdevs_list": [ 00:22:53.798 { 00:22:53.798 "name": "spare", 00:22:53.798 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:53.798 "is_configured": true, 00:22:53.798 "data_offset": 2048, 00:22:53.798 "data_size": 63488 00:22:53.798 }, 00:22:53.798 { 00:22:53.798 "name": "BaseBdev2", 00:22:53.798 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:53.798 "is_configured": true, 00:22:53.798 "data_offset": 2048, 00:22:53.798 "data_size": 63488 00:22:53.798 }, 00:22:53.798 { 00:22:53.798 "name": "BaseBdev3", 00:22:53.798 
"uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:53.798 "is_configured": true, 00:22:53.798 "data_offset": 2048, 00:22:53.798 "data_size": 63488 00:22:53.798 } 00:22:53.798 ] 00:22:53.798 }' 00:22:53.798 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.104 "name": 
"raid_bdev1", 00:22:54.104 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:54.104 "strip_size_kb": 64, 00:22:54.104 "state": "online", 00:22:54.104 "raid_level": "raid5f", 00:22:54.104 "superblock": true, 00:22:54.104 "num_base_bdevs": 3, 00:22:54.104 "num_base_bdevs_discovered": 3, 00:22:54.104 "num_base_bdevs_operational": 3, 00:22:54.104 "base_bdevs_list": [ 00:22:54.104 { 00:22:54.104 "name": "spare", 00:22:54.104 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:54.104 "is_configured": true, 00:22:54.104 "data_offset": 2048, 00:22:54.104 "data_size": 63488 00:22:54.104 }, 00:22:54.104 { 00:22:54.104 "name": "BaseBdev2", 00:22:54.104 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:54.104 "is_configured": true, 00:22:54.104 "data_offset": 2048, 00:22:54.104 "data_size": 63488 00:22:54.104 }, 00:22:54.104 { 00:22:54.104 "name": "BaseBdev3", 00:22:54.104 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:54.104 "is_configured": true, 00:22:54.104 "data_offset": 2048, 00:22:54.104 "data_size": 63488 00:22:54.104 } 00:22:54.104 ] 00:22:54.104 }' 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.104 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.363 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.363 "name": "raid_bdev1", 00:22:54.363 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:54.363 "strip_size_kb": 64, 00:22:54.363 "state": "online", 00:22:54.363 "raid_level": "raid5f", 00:22:54.363 "superblock": true, 00:22:54.363 "num_base_bdevs": 3, 00:22:54.363 "num_base_bdevs_discovered": 3, 00:22:54.363 "num_base_bdevs_operational": 3, 00:22:54.363 "base_bdevs_list": [ 00:22:54.363 { 00:22:54.363 "name": "spare", 00:22:54.363 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:54.363 "is_configured": true, 00:22:54.363 "data_offset": 2048, 00:22:54.363 "data_size": 63488 00:22:54.363 }, 00:22:54.363 { 00:22:54.363 "name": "BaseBdev2", 
00:22:54.363 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:54.363 "is_configured": true, 00:22:54.363 "data_offset": 2048, 00:22:54.363 "data_size": 63488 00:22:54.363 }, 00:22:54.363 { 00:22:54.363 "name": "BaseBdev3", 00:22:54.363 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:54.363 "is_configured": true, 00:22:54.363 "data_offset": 2048, 00:22:54.363 "data_size": 63488 00:22:54.363 } 00:22:54.363 ] 00:22:54.363 }' 00:22:54.363 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.363 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.621 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:54.621 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.621 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.621 [2024-12-06 13:17:01.106139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:54.621 [2024-12-06 13:17:01.106179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.621 [2024-12-06 13:17:01.106328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.621 [2024-12-06 13:17:01.106469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.621 [2024-12-06 13:17:01.106677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:54.621 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.621 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.621 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.621 13:17:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.621 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:54.621 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.879 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:55.138 /dev/nbd0 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.138 1+0 records in 00:22:55.138 1+0 records out 00:22:55.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038618 s, 10.6 MB/s 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 
2 )) 00:22:55.138 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:55.396 /dev/nbd1 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.396 1+0 records in 00:22:55.396 1+0 records out 00:22:55.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569437 s, 7.2 MB/s 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:55.396 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:55.654 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:55.654 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:55.654 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:55.654 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:55.654 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:55.654 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:55.654 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.220 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:56.479 13:17:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.479 [2024-12-06 13:17:02.809934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:56.479 [2024-12-06 13:17:02.810046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.479 [2024-12-06 13:17:02.810114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:56.479 [2024-12-06 13:17:02.810134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.479 [2024-12-06 13:17:02.813501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.479 [2024-12-06 13:17:02.813549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:56.479 [2024-12-06 13:17:02.813719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:56.479 [2024-12-06 13:17:02.813796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.479 [2024-12-06 13:17:02.813997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:56.479 [2024-12-06 13:17:02.814208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:56.479 spare 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.479 [2024-12-06 13:17:02.914383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 
00:22:56.479 [2024-12-06 13:17:02.914485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:56.479 [2024-12-06 13:17:02.915051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:22:56.479 [2024-12-06 13:17:02.919810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:56.479 [2024-12-06 13:17:02.919857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:56.479 [2024-12-06 13:17:02.920211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:56.479 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:56.480 "name": "raid_bdev1", 00:22:56.480 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:56.480 "strip_size_kb": 64, 00:22:56.480 "state": "online", 00:22:56.480 "raid_level": "raid5f", 00:22:56.480 "superblock": true, 00:22:56.480 "num_base_bdevs": 3, 00:22:56.480 "num_base_bdevs_discovered": 3, 00:22:56.480 "num_base_bdevs_operational": 3, 00:22:56.480 "base_bdevs_list": [ 00:22:56.480 { 00:22:56.480 "name": "spare", 00:22:56.480 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:56.480 "is_configured": true, 00:22:56.480 "data_offset": 2048, 00:22:56.480 "data_size": 63488 00:22:56.480 }, 00:22:56.480 { 00:22:56.480 "name": "BaseBdev2", 00:22:56.480 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:56.480 "is_configured": true, 00:22:56.480 "data_offset": 2048, 00:22:56.480 "data_size": 63488 00:22:56.480 }, 00:22:56.480 { 00:22:56.480 "name": "BaseBdev3", 00:22:56.480 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:56.480 "is_configured": true, 00:22:56.480 "data_offset": 2048, 00:22:56.480 "data_size": 63488 00:22:56.480 } 00:22:56.480 ] 00:22:56.480 }' 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:56.480 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.076 13:17:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.076 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.076 "name": "raid_bdev1", 00:22:57.076 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:57.076 "strip_size_kb": 64, 00:22:57.076 "state": "online", 00:22:57.076 "raid_level": "raid5f", 00:22:57.076 "superblock": true, 00:22:57.076 "num_base_bdevs": 3, 00:22:57.076 "num_base_bdevs_discovered": 3, 00:22:57.076 "num_base_bdevs_operational": 3, 00:22:57.076 "base_bdevs_list": [ 00:22:57.076 { 00:22:57.076 "name": "spare", 00:22:57.076 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:57.076 "is_configured": true, 00:22:57.076 "data_offset": 2048, 00:22:57.077 "data_size": 63488 00:22:57.077 }, 00:22:57.077 { 00:22:57.077 "name": "BaseBdev2", 00:22:57.077 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:57.077 "is_configured": true, 00:22:57.077 "data_offset": 2048, 00:22:57.077 "data_size": 63488 00:22:57.077 }, 00:22:57.077 { 00:22:57.077 "name": "BaseBdev3", 00:22:57.077 "uuid": 
"5562651f-04b9-5328-933d-4165a2e2e931", 00:22:57.077 "is_configured": true, 00:22:57.077 "data_offset": 2048, 00:22:57.077 "data_size": 63488 00:22:57.077 } 00:22:57.077 ] 00:22:57.077 }' 00:22:57.077 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.077 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:57.077 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.335 [2024-12-06 13:17:03.662402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:57.335 
13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.335 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.336 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.336 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.336 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.336 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.336 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.336 "name": "raid_bdev1", 00:22:57.336 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:57.336 "strip_size_kb": 64, 00:22:57.336 "state": "online", 00:22:57.336 "raid_level": "raid5f", 00:22:57.336 "superblock": true, 00:22:57.336 "num_base_bdevs": 3, 00:22:57.336 "num_base_bdevs_discovered": 2, 00:22:57.336 "num_base_bdevs_operational": 2, 
00:22:57.336 "base_bdevs_list": [ 00:22:57.336 { 00:22:57.336 "name": null, 00:22:57.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.336 "is_configured": false, 00:22:57.336 "data_offset": 0, 00:22:57.336 "data_size": 63488 00:22:57.336 }, 00:22:57.336 { 00:22:57.336 "name": "BaseBdev2", 00:22:57.336 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:57.336 "is_configured": true, 00:22:57.336 "data_offset": 2048, 00:22:57.336 "data_size": 63488 00:22:57.336 }, 00:22:57.336 { 00:22:57.336 "name": "BaseBdev3", 00:22:57.336 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:57.336 "is_configured": true, 00:22:57.336 "data_offset": 2048, 00:22:57.336 "data_size": 63488 00:22:57.336 } 00:22:57.336 ] 00:22:57.336 }' 00:22:57.336 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.336 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.903 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:57.903 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.903 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.903 [2024-12-06 13:17:04.194601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:57.903 [2024-12-06 13:17:04.194894] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:57.903 [2024-12-06 13:17:04.194937] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:57.903 [2024-12-06 13:17:04.194991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:57.903 [2024-12-06 13:17:04.210698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:22:57.903 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.903 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:57.903 [2024-12-06 13:17:04.218625] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.853 "name": "raid_bdev1", 00:22:58.853 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:58.853 "strip_size_kb": 64, 00:22:58.853 "state": "online", 00:22:58.853 
"raid_level": "raid5f", 00:22:58.853 "superblock": true, 00:22:58.853 "num_base_bdevs": 3, 00:22:58.853 "num_base_bdevs_discovered": 3, 00:22:58.853 "num_base_bdevs_operational": 3, 00:22:58.853 "process": { 00:22:58.853 "type": "rebuild", 00:22:58.853 "target": "spare", 00:22:58.853 "progress": { 00:22:58.853 "blocks": 18432, 00:22:58.853 "percent": 14 00:22:58.853 } 00:22:58.853 }, 00:22:58.853 "base_bdevs_list": [ 00:22:58.853 { 00:22:58.853 "name": "spare", 00:22:58.853 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:22:58.853 "is_configured": true, 00:22:58.853 "data_offset": 2048, 00:22:58.853 "data_size": 63488 00:22:58.853 }, 00:22:58.853 { 00:22:58.853 "name": "BaseBdev2", 00:22:58.853 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:58.853 "is_configured": true, 00:22:58.853 "data_offset": 2048, 00:22:58.853 "data_size": 63488 00:22:58.853 }, 00:22:58.853 { 00:22:58.853 "name": "BaseBdev3", 00:22:58.853 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:58.853 "is_configured": true, 00:22:58.853 "data_offset": 2048, 00:22:58.853 "data_size": 63488 00:22:58.853 } 00:22:58.853 ] 00:22:58.853 }' 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.853 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.110 [2024-12-06 13:17:05.384695] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.110 [2024-12-06 13:17:05.436976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:59.110 [2024-12-06 13:17:05.437089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.110 [2024-12-06 13:17:05.437118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.110 [2024-12-06 13:17:05.437134] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.110 "name": "raid_bdev1", 00:22:59.110 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:22:59.110 "strip_size_kb": 64, 00:22:59.110 "state": "online", 00:22:59.110 "raid_level": "raid5f", 00:22:59.110 "superblock": true, 00:22:59.110 "num_base_bdevs": 3, 00:22:59.110 "num_base_bdevs_discovered": 2, 00:22:59.110 "num_base_bdevs_operational": 2, 00:22:59.110 "base_bdevs_list": [ 00:22:59.110 { 00:22:59.110 "name": null, 00:22:59.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.110 "is_configured": false, 00:22:59.110 "data_offset": 0, 00:22:59.110 "data_size": 63488 00:22:59.110 }, 00:22:59.110 { 00:22:59.110 "name": "BaseBdev2", 00:22:59.110 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:22:59.110 "is_configured": true, 00:22:59.110 "data_offset": 2048, 00:22:59.110 "data_size": 63488 00:22:59.110 }, 00:22:59.110 { 00:22:59.110 "name": "BaseBdev3", 00:22:59.110 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:22:59.110 "is_configured": true, 00:22:59.110 "data_offset": 2048, 00:22:59.110 "data_size": 63488 00:22:59.110 } 00:22:59.110 ] 00:22:59.110 }' 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.110 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.677 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:59.677 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.677 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.677 [2024-12-06 13:17:05.990889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:59.677 [2024-12-06 13:17:05.990986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.677 [2024-12-06 13:17:05.991022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:59.677 [2024-12-06 13:17:05.991045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.677 [2024-12-06 13:17:05.991781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.677 [2024-12-06 13:17:05.991830] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:59.677 [2024-12-06 13:17:05.991980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:59.677 [2024-12-06 13:17:05.992010] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:59.677 [2024-12-06 13:17:05.992027] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:59.677 [2024-12-06 13:17:05.992068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:59.677 [2024-12-06 13:17:06.007241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:22:59.677 spare 00:22:59.677 13:17:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.677 13:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:59.677 [2024-12-06 13:17:06.014709] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:00.612 "name": "raid_bdev1", 00:23:00.612 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:23:00.612 "strip_size_kb": 64, 00:23:00.612 "state": 
"online", 00:23:00.612 "raid_level": "raid5f", 00:23:00.612 "superblock": true, 00:23:00.612 "num_base_bdevs": 3, 00:23:00.612 "num_base_bdevs_discovered": 3, 00:23:00.612 "num_base_bdevs_operational": 3, 00:23:00.612 "process": { 00:23:00.612 "type": "rebuild", 00:23:00.612 "target": "spare", 00:23:00.612 "progress": { 00:23:00.612 "blocks": 18432, 00:23:00.612 "percent": 14 00:23:00.612 } 00:23:00.612 }, 00:23:00.612 "base_bdevs_list": [ 00:23:00.612 { 00:23:00.612 "name": "spare", 00:23:00.612 "uuid": "0e53e89b-94e9-5400-a376-6dea76eba9b2", 00:23:00.612 "is_configured": true, 00:23:00.612 "data_offset": 2048, 00:23:00.612 "data_size": 63488 00:23:00.612 }, 00:23:00.612 { 00:23:00.612 "name": "BaseBdev2", 00:23:00.612 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:23:00.612 "is_configured": true, 00:23:00.612 "data_offset": 2048, 00:23:00.612 "data_size": 63488 00:23:00.612 }, 00:23:00.612 { 00:23:00.612 "name": "BaseBdev3", 00:23:00.612 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:23:00.612 "is_configured": true, 00:23:00.612 "data_offset": 2048, 00:23:00.612 "data_size": 63488 00:23:00.612 } 00:23:00.612 ] 00:23:00.612 }' 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:00.612 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.871 [2024-12-06 13:17:07.185303] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:00.871 [2024-12-06 13:17:07.233314] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:00.871 [2024-12-06 13:17:07.233424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.871 [2024-12-06 13:17:07.233472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:00.871 [2024-12-06 13:17:07.233487] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.871 "name": "raid_bdev1", 00:23:00.871 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:23:00.871 "strip_size_kb": 64, 00:23:00.871 "state": "online", 00:23:00.871 "raid_level": "raid5f", 00:23:00.871 "superblock": true, 00:23:00.871 "num_base_bdevs": 3, 00:23:00.871 "num_base_bdevs_discovered": 2, 00:23:00.871 "num_base_bdevs_operational": 2, 00:23:00.871 "base_bdevs_list": [ 00:23:00.871 { 00:23:00.871 "name": null, 00:23:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.871 "is_configured": false, 00:23:00.871 "data_offset": 0, 00:23:00.871 "data_size": 63488 00:23:00.871 }, 00:23:00.871 { 00:23:00.871 "name": "BaseBdev2", 00:23:00.871 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:23:00.871 "is_configured": true, 00:23:00.871 "data_offset": 2048, 00:23:00.871 "data_size": 63488 00:23:00.871 }, 00:23:00.871 { 00:23:00.871 "name": "BaseBdev3", 00:23:00.871 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:23:00.871 "is_configured": true, 00:23:00.871 "data_offset": 2048, 00:23:00.871 "data_size": 63488 00:23:00.871 } 00:23:00.871 ] 00:23:00.871 }' 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.871 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.438 "name": "raid_bdev1", 00:23:01.438 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:23:01.438 "strip_size_kb": 64, 00:23:01.438 "state": "online", 00:23:01.438 "raid_level": "raid5f", 00:23:01.438 "superblock": true, 00:23:01.438 "num_base_bdevs": 3, 00:23:01.438 "num_base_bdevs_discovered": 2, 00:23:01.438 "num_base_bdevs_operational": 2, 00:23:01.438 "base_bdevs_list": [ 00:23:01.438 { 00:23:01.438 "name": null, 00:23:01.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.438 "is_configured": false, 00:23:01.438 "data_offset": 0, 00:23:01.438 "data_size": 63488 00:23:01.438 }, 00:23:01.438 { 00:23:01.438 "name": "BaseBdev2", 00:23:01.438 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:23:01.438 "is_configured": true, 00:23:01.438 "data_offset": 2048, 00:23:01.438 "data_size": 63488 00:23:01.438 }, 00:23:01.438 { 00:23:01.438 "name": "BaseBdev3", 00:23:01.438 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:23:01.438 "is_configured": true, 
00:23:01.438 "data_offset": 2048, 00:23:01.438 "data_size": 63488 00:23:01.438 } 00:23:01.438 ] 00:23:01.438 }' 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:01.438 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 [2024-12-06 13:17:07.981926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:01.697 [2024-12-06 13:17:07.982011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.697 [2024-12-06 13:17:07.982053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:01.697 [2024-12-06 13:17:07.982068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.697 [2024-12-06 13:17:07.982707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.697 [2024-12-06 
13:17:07.982758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:01.697 [2024-12-06 13:17:07.982875] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:01.697 [2024-12-06 13:17:07.982902] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:01.697 [2024-12-06 13:17:07.982930] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:01.697 [2024-12-06 13:17:07.982946] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:01.697 BaseBdev1 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.697 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.666 13:17:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.666 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.666 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.666 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.666 "name": "raid_bdev1", 00:23:02.666 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:23:02.666 "strip_size_kb": 64, 00:23:02.666 "state": "online", 00:23:02.666 "raid_level": "raid5f", 00:23:02.666 "superblock": true, 00:23:02.666 "num_base_bdevs": 3, 00:23:02.667 "num_base_bdevs_discovered": 2, 00:23:02.667 "num_base_bdevs_operational": 2, 00:23:02.667 "base_bdevs_list": [ 00:23:02.667 { 00:23:02.667 "name": null, 00:23:02.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.667 "is_configured": false, 00:23:02.667 "data_offset": 0, 00:23:02.667 "data_size": 63488 00:23:02.667 }, 00:23:02.667 { 00:23:02.667 "name": "BaseBdev2", 00:23:02.667 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:23:02.667 "is_configured": true, 00:23:02.667 "data_offset": 2048, 00:23:02.667 "data_size": 63488 00:23:02.667 }, 00:23:02.667 { 00:23:02.667 "name": "BaseBdev3", 00:23:02.667 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:23:02.667 "is_configured": true, 00:23:02.667 "data_offset": 2048, 00:23:02.667 "data_size": 63488 00:23:02.667 } 00:23:02.667 ] 00:23:02.667 }' 00:23:02.667 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.667 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.235 "name": "raid_bdev1", 00:23:03.235 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:23:03.235 "strip_size_kb": 64, 00:23:03.235 "state": "online", 00:23:03.235 "raid_level": "raid5f", 00:23:03.235 "superblock": true, 00:23:03.235 "num_base_bdevs": 3, 00:23:03.235 "num_base_bdevs_discovered": 2, 00:23:03.235 "num_base_bdevs_operational": 2, 00:23:03.235 "base_bdevs_list": [ 00:23:03.235 { 00:23:03.235 "name": null, 00:23:03.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.235 "is_configured": false, 00:23:03.235 "data_offset": 0, 00:23:03.235 "data_size": 63488 00:23:03.235 }, 00:23:03.235 { 00:23:03.235 "name": "BaseBdev2", 00:23:03.235 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 
00:23:03.235 "is_configured": true, 00:23:03.235 "data_offset": 2048, 00:23:03.235 "data_size": 63488 00:23:03.235 }, 00:23:03.235 { 00:23:03.235 "name": "BaseBdev3", 00:23:03.235 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:23:03.235 "is_configured": true, 00:23:03.235 "data_offset": 2048, 00:23:03.235 "data_size": 63488 00:23:03.235 } 00:23:03.235 ] 00:23:03.235 }' 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.235 13:17:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.235 [2024-12-06 13:17:09.698535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.235 [2024-12-06 13:17:09.698765] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:03.235 [2024-12-06 13:17:09.698806] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:03.235 request: 00:23:03.235 { 00:23:03.235 "base_bdev": "BaseBdev1", 00:23:03.235 "raid_bdev": "raid_bdev1", 00:23:03.235 "method": "bdev_raid_add_base_bdev", 00:23:03.235 "req_id": 1 00:23:03.235 } 00:23:03.235 Got JSON-RPC error response 00:23:03.235 response: 00:23:03.235 { 00:23:03.235 "code": -22, 00:23:03.235 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:03.235 } 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:03.235 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.612 "name": "raid_bdev1", 00:23:04.612 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:23:04.612 "strip_size_kb": 64, 00:23:04.612 "state": "online", 00:23:04.612 "raid_level": "raid5f", 00:23:04.612 "superblock": true, 00:23:04.612 "num_base_bdevs": 3, 00:23:04.612 "num_base_bdevs_discovered": 2, 00:23:04.612 "num_base_bdevs_operational": 2, 00:23:04.612 "base_bdevs_list": [ 00:23:04.612 { 00:23:04.612 "name": null, 00:23:04.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.612 "is_configured": false, 00:23:04.612 "data_offset": 0, 00:23:04.612 "data_size": 63488 00:23:04.612 }, 00:23:04.612 { 00:23:04.612 
"name": "BaseBdev2", 00:23:04.612 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:23:04.612 "is_configured": true, 00:23:04.612 "data_offset": 2048, 00:23:04.612 "data_size": 63488 00:23:04.612 }, 00:23:04.612 { 00:23:04.612 "name": "BaseBdev3", 00:23:04.612 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:23:04.612 "is_configured": true, 00:23:04.612 "data_offset": 2048, 00:23:04.612 "data_size": 63488 00:23:04.612 } 00:23:04.612 ] 00:23:04.612 }' 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.612 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.871 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.871 "name": "raid_bdev1", 00:23:04.871 "uuid": "1ed992bb-1b8b-44fa-81d0-49014701866f", 00:23:04.871 
"strip_size_kb": 64, 00:23:04.871 "state": "online", 00:23:04.871 "raid_level": "raid5f", 00:23:04.871 "superblock": true, 00:23:04.871 "num_base_bdevs": 3, 00:23:04.871 "num_base_bdevs_discovered": 2, 00:23:04.871 "num_base_bdevs_operational": 2, 00:23:04.871 "base_bdevs_list": [ 00:23:04.871 { 00:23:04.871 "name": null, 00:23:04.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.871 "is_configured": false, 00:23:04.871 "data_offset": 0, 00:23:04.871 "data_size": 63488 00:23:04.871 }, 00:23:04.871 { 00:23:04.871 "name": "BaseBdev2", 00:23:04.871 "uuid": "b5f304e3-ded4-5aaf-af6e-3cd64d343c7a", 00:23:04.871 "is_configured": true, 00:23:04.871 "data_offset": 2048, 00:23:04.871 "data_size": 63488 00:23:04.871 }, 00:23:04.871 { 00:23:04.871 "name": "BaseBdev3", 00:23:04.871 "uuid": "5562651f-04b9-5328-933d-4165a2e2e931", 00:23:04.871 "is_configured": true, 00:23:04.871 "data_offset": 2048, 00:23:04.871 "data_size": 63488 00:23:04.871 } 00:23:04.871 ] 00:23:04.871 }' 00:23:04.872 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.872 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:04.872 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.872 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:04.872 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82734 00:23:04.872 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82734 ']' 00:23:04.872 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82734 00:23:05.130 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:05.130 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.130 13:17:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82734 00:23:05.130 killing process with pid 82734 00:23:05.130 Received shutdown signal, test time was about 60.000000 seconds 00:23:05.130 00:23:05.130 Latency(us) 00:23:05.130 [2024-12-06T13:17:11.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.130 [2024-12-06T13:17:11.659Z] =================================================================================================================== 00:23:05.130 [2024-12-06T13:17:11.659Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.130 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:05.130 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:05.130 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82734' 00:23:05.130 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82734 00:23:05.130 [2024-12-06 13:17:11.429959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:05.130 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82734 00:23:05.130 [2024-12-06 13:17:11.430119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.131 [2024-12-06 13:17:11.430208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.131 [2024-12-06 13:17:11.430230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:05.389 [2024-12-06 13:17:11.790084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:06.768 ************************************ 00:23:06.768 END TEST raid5f_rebuild_test_sb 00:23:06.768 ************************************ 00:23:06.768 13:17:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:06.768 00:23:06.768 real 0m25.300s 00:23:06.768 user 0m33.720s 00:23:06.768 sys 0m2.769s 00:23:06.768 13:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.768 13:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.768 13:17:12 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:23:06.768 13:17:12 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:23:06.768 13:17:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:06.768 13:17:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.768 13:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:06.768 ************************************ 00:23:06.768 START TEST raid5f_state_function_test 00:23:06.768 ************************************ 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83501 00:23:06.768 Process raid pid: 83501 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83501' 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83501 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83501 ']' 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.768 13:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:06.768 [2024-12-06 13:17:13.048531] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:23:06.768 [2024-12-06 13:17:13.048742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.768 [2024-12-06 13:17:13.240151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.092 [2024-12-06 13:17:13.386296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.351 [2024-12-06 13:17:13.612294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.351 [2024-12-06 13:17:13.612364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.608 [2024-12-06 13:17:13.953924] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:07.608 [2024-12-06 13:17:13.954014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:07.608 [2024-12-06 13:17:13.954032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:07.608 [2024-12-06 13:17:13.954049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:07.608 [2024-12-06 13:17:13.954059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:23:07.608 [2024-12-06 13:17:13.954073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:07.608 [2024-12-06 13:17:13.954083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:07.608 [2024-12-06 13:17:13.954097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.608 13:17:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.608 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.608 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.608 "name": "Existed_Raid", 00:23:07.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.608 "strip_size_kb": 64, 00:23:07.608 "state": "configuring", 00:23:07.608 "raid_level": "raid5f", 00:23:07.608 "superblock": false, 00:23:07.608 "num_base_bdevs": 4, 00:23:07.608 "num_base_bdevs_discovered": 0, 00:23:07.608 "num_base_bdevs_operational": 4, 00:23:07.608 "base_bdevs_list": [ 00:23:07.608 { 00:23:07.608 "name": "BaseBdev1", 00:23:07.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.608 "is_configured": false, 00:23:07.608 "data_offset": 0, 00:23:07.608 "data_size": 0 00:23:07.608 }, 00:23:07.608 { 00:23:07.608 "name": "BaseBdev2", 00:23:07.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.609 "is_configured": false, 00:23:07.609 "data_offset": 0, 00:23:07.609 "data_size": 0 00:23:07.609 }, 00:23:07.609 { 00:23:07.609 "name": "BaseBdev3", 00:23:07.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.609 "is_configured": false, 00:23:07.609 "data_offset": 0, 00:23:07.609 "data_size": 0 00:23:07.609 }, 00:23:07.609 { 00:23:07.609 "name": "BaseBdev4", 00:23:07.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.609 "is_configured": false, 00:23:07.609 "data_offset": 0, 00:23:07.609 "data_size": 0 00:23:07.609 } 00:23:07.609 ] 00:23:07.609 }' 00:23:07.609 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.609 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.173 13:17:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:08.173 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.173 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.173 [2024-12-06 13:17:14.490033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:08.173 [2024-12-06 13:17:14.490105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:08.173 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.173 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:08.173 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.173 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.173 [2024-12-06 13:17:14.498023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:08.174 [2024-12-06 13:17:14.498099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:08.174 [2024-12-06 13:17:14.498115] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:08.174 [2024-12-06 13:17:14.498131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:08.174 [2024-12-06 13:17:14.498141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:08.174 [2024-12-06 13:17:14.498157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:08.174 [2024-12-06 13:17:14.498174] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:23:08.174 [2024-12-06 13:17:14.498188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.174 [2024-12-06 13:17:14.547237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.174 BaseBdev1 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.174 
13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.174 [ 00:23:08.174 { 00:23:08.174 "name": "BaseBdev1", 00:23:08.174 "aliases": [ 00:23:08.174 "67d3050d-841b-4f12-9cba-d51b612587d2" 00:23:08.174 ], 00:23:08.174 "product_name": "Malloc disk", 00:23:08.174 "block_size": 512, 00:23:08.174 "num_blocks": 65536, 00:23:08.174 "uuid": "67d3050d-841b-4f12-9cba-d51b612587d2", 00:23:08.174 "assigned_rate_limits": { 00:23:08.174 "rw_ios_per_sec": 0, 00:23:08.174 "rw_mbytes_per_sec": 0, 00:23:08.174 "r_mbytes_per_sec": 0, 00:23:08.174 "w_mbytes_per_sec": 0 00:23:08.174 }, 00:23:08.174 "claimed": true, 00:23:08.174 "claim_type": "exclusive_write", 00:23:08.174 "zoned": false, 00:23:08.174 "supported_io_types": { 00:23:08.174 "read": true, 00:23:08.174 "write": true, 00:23:08.174 "unmap": true, 00:23:08.174 "flush": true, 00:23:08.174 "reset": true, 00:23:08.174 "nvme_admin": false, 00:23:08.174 "nvme_io": false, 00:23:08.174 "nvme_io_md": false, 00:23:08.174 "write_zeroes": true, 00:23:08.174 "zcopy": true, 00:23:08.174 "get_zone_info": false, 00:23:08.174 "zone_management": false, 00:23:08.174 "zone_append": false, 00:23:08.174 "compare": false, 00:23:08.174 "compare_and_write": false, 00:23:08.174 "abort": true, 00:23:08.174 "seek_hole": false, 00:23:08.174 "seek_data": false, 00:23:08.174 "copy": true, 00:23:08.174 "nvme_iov_md": false 00:23:08.174 }, 00:23:08.174 "memory_domains": [ 00:23:08.174 { 00:23:08.174 "dma_device_id": "system", 00:23:08.174 "dma_device_type": 1 00:23:08.174 }, 00:23:08.174 { 00:23:08.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.174 "dma_device_type": 2 00:23:08.174 } 00:23:08.174 ], 00:23:08.174 "driver_specific": {} 00:23:08.174 } 
00:23:08.174 ] 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.174 "name": "Existed_Raid", 00:23:08.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.174 "strip_size_kb": 64, 00:23:08.174 "state": "configuring", 00:23:08.174 "raid_level": "raid5f", 00:23:08.174 "superblock": false, 00:23:08.174 "num_base_bdevs": 4, 00:23:08.174 "num_base_bdevs_discovered": 1, 00:23:08.174 "num_base_bdevs_operational": 4, 00:23:08.174 "base_bdevs_list": [ 00:23:08.174 { 00:23:08.174 "name": "BaseBdev1", 00:23:08.174 "uuid": "67d3050d-841b-4f12-9cba-d51b612587d2", 00:23:08.174 "is_configured": true, 00:23:08.174 "data_offset": 0, 00:23:08.174 "data_size": 65536 00:23:08.174 }, 00:23:08.174 { 00:23:08.174 "name": "BaseBdev2", 00:23:08.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.174 "is_configured": false, 00:23:08.174 "data_offset": 0, 00:23:08.174 "data_size": 0 00:23:08.174 }, 00:23:08.174 { 00:23:08.174 "name": "BaseBdev3", 00:23:08.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.174 "is_configured": false, 00:23:08.174 "data_offset": 0, 00:23:08.174 "data_size": 0 00:23:08.174 }, 00:23:08.174 { 00:23:08.174 "name": "BaseBdev4", 00:23:08.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.174 "is_configured": false, 00:23:08.174 "data_offset": 0, 00:23:08.174 "data_size": 0 00:23:08.174 } 00:23:08.174 ] 00:23:08.174 }' 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.174 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.788 
[2024-12-06 13:17:15.123511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:08.788 [2024-12-06 13:17:15.123596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.788 [2024-12-06 13:17:15.131556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.788 [2024-12-06 13:17:15.134137] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:08.788 [2024-12-06 13:17:15.134195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:08.788 [2024-12-06 13:17:15.134211] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:08.788 [2024-12-06 13:17:15.134229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:08.788 [2024-12-06 13:17:15.134250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:08.788 [2024-12-06 13:17:15.134265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.788 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.788 "name": "Existed_Raid", 00:23:08.788 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:08.788 "strip_size_kb": 64, 00:23:08.788 "state": "configuring", 00:23:08.788 "raid_level": "raid5f", 00:23:08.788 "superblock": false, 00:23:08.788 "num_base_bdevs": 4, 00:23:08.788 "num_base_bdevs_discovered": 1, 00:23:08.788 "num_base_bdevs_operational": 4, 00:23:08.788 "base_bdevs_list": [ 00:23:08.788 { 00:23:08.788 "name": "BaseBdev1", 00:23:08.788 "uuid": "67d3050d-841b-4f12-9cba-d51b612587d2", 00:23:08.788 "is_configured": true, 00:23:08.788 "data_offset": 0, 00:23:08.788 "data_size": 65536 00:23:08.788 }, 00:23:08.788 { 00:23:08.788 "name": "BaseBdev2", 00:23:08.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.789 "is_configured": false, 00:23:08.789 "data_offset": 0, 00:23:08.789 "data_size": 0 00:23:08.789 }, 00:23:08.789 { 00:23:08.789 "name": "BaseBdev3", 00:23:08.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.789 "is_configured": false, 00:23:08.789 "data_offset": 0, 00:23:08.789 "data_size": 0 00:23:08.789 }, 00:23:08.789 { 00:23:08.789 "name": "BaseBdev4", 00:23:08.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.789 "is_configured": false, 00:23:08.789 "data_offset": 0, 00:23:08.789 "data_size": 0 00:23:08.789 } 00:23:08.789 ] 00:23:08.789 }' 00:23:08.789 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.789 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.352 [2024-12-06 13:17:15.701982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:09.352 BaseBdev2 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.352 [ 00:23:09.352 { 00:23:09.352 "name": "BaseBdev2", 00:23:09.352 "aliases": [ 00:23:09.352 "9284dcc3-5725-4fba-8937-1c84bfcd70ce" 00:23:09.352 ], 00:23:09.352 "product_name": "Malloc disk", 00:23:09.352 "block_size": 512, 00:23:09.352 "num_blocks": 65536, 00:23:09.352 "uuid": "9284dcc3-5725-4fba-8937-1c84bfcd70ce", 00:23:09.352 "assigned_rate_limits": { 00:23:09.352 "rw_ios_per_sec": 0, 00:23:09.352 "rw_mbytes_per_sec": 0, 00:23:09.352 
"r_mbytes_per_sec": 0, 00:23:09.352 "w_mbytes_per_sec": 0 00:23:09.352 }, 00:23:09.352 "claimed": true, 00:23:09.352 "claim_type": "exclusive_write", 00:23:09.352 "zoned": false, 00:23:09.352 "supported_io_types": { 00:23:09.352 "read": true, 00:23:09.352 "write": true, 00:23:09.352 "unmap": true, 00:23:09.352 "flush": true, 00:23:09.352 "reset": true, 00:23:09.352 "nvme_admin": false, 00:23:09.352 "nvme_io": false, 00:23:09.352 "nvme_io_md": false, 00:23:09.352 "write_zeroes": true, 00:23:09.352 "zcopy": true, 00:23:09.352 "get_zone_info": false, 00:23:09.352 "zone_management": false, 00:23:09.352 "zone_append": false, 00:23:09.352 "compare": false, 00:23:09.352 "compare_and_write": false, 00:23:09.352 "abort": true, 00:23:09.352 "seek_hole": false, 00:23:09.352 "seek_data": false, 00:23:09.352 "copy": true, 00:23:09.352 "nvme_iov_md": false 00:23:09.352 }, 00:23:09.352 "memory_domains": [ 00:23:09.352 { 00:23:09.352 "dma_device_id": "system", 00:23:09.352 "dma_device_type": 1 00:23:09.352 }, 00:23:09.352 { 00:23:09.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.352 "dma_device_type": 2 00:23:09.352 } 00:23:09.352 ], 00:23:09.352 "driver_specific": {} 00:23:09.352 } 00:23:09.352 ] 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.352 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.352 "name": "Existed_Raid", 00:23:09.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.352 "strip_size_kb": 64, 00:23:09.352 "state": "configuring", 00:23:09.352 "raid_level": "raid5f", 00:23:09.352 "superblock": false, 00:23:09.352 "num_base_bdevs": 4, 00:23:09.352 "num_base_bdevs_discovered": 2, 00:23:09.352 "num_base_bdevs_operational": 4, 00:23:09.352 "base_bdevs_list": [ 00:23:09.352 { 00:23:09.352 "name": "BaseBdev1", 00:23:09.352 "uuid": 
"67d3050d-841b-4f12-9cba-d51b612587d2", 00:23:09.352 "is_configured": true, 00:23:09.352 "data_offset": 0, 00:23:09.353 "data_size": 65536 00:23:09.353 }, 00:23:09.353 { 00:23:09.353 "name": "BaseBdev2", 00:23:09.353 "uuid": "9284dcc3-5725-4fba-8937-1c84bfcd70ce", 00:23:09.353 "is_configured": true, 00:23:09.353 "data_offset": 0, 00:23:09.353 "data_size": 65536 00:23:09.353 }, 00:23:09.353 { 00:23:09.353 "name": "BaseBdev3", 00:23:09.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.353 "is_configured": false, 00:23:09.353 "data_offset": 0, 00:23:09.353 "data_size": 0 00:23:09.353 }, 00:23:09.353 { 00:23:09.353 "name": "BaseBdev4", 00:23:09.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.353 "is_configured": false, 00:23:09.353 "data_offset": 0, 00:23:09.353 "data_size": 0 00:23:09.353 } 00:23:09.353 ] 00:23:09.353 }' 00:23:09.353 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.353 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.919 [2024-12-06 13:17:16.308035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:09.919 BaseBdev3 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.919 [ 00:23:09.919 { 00:23:09.919 "name": "BaseBdev3", 00:23:09.919 "aliases": [ 00:23:09.919 "d864846d-d3c0-4332-b59e-60c205d70692" 00:23:09.919 ], 00:23:09.919 "product_name": "Malloc disk", 00:23:09.919 "block_size": 512, 00:23:09.919 "num_blocks": 65536, 00:23:09.919 "uuid": "d864846d-d3c0-4332-b59e-60c205d70692", 00:23:09.919 "assigned_rate_limits": { 00:23:09.919 "rw_ios_per_sec": 0, 00:23:09.919 "rw_mbytes_per_sec": 0, 00:23:09.919 "r_mbytes_per_sec": 0, 00:23:09.919 "w_mbytes_per_sec": 0 00:23:09.919 }, 00:23:09.919 "claimed": true, 00:23:09.919 "claim_type": "exclusive_write", 00:23:09.919 "zoned": false, 00:23:09.919 "supported_io_types": { 00:23:09.919 "read": true, 00:23:09.919 "write": true, 00:23:09.919 "unmap": true, 00:23:09.919 "flush": true, 00:23:09.919 "reset": true, 00:23:09.919 "nvme_admin": false, 
00:23:09.919 "nvme_io": false, 00:23:09.919 "nvme_io_md": false, 00:23:09.919 "write_zeroes": true, 00:23:09.919 "zcopy": true, 00:23:09.919 "get_zone_info": false, 00:23:09.919 "zone_management": false, 00:23:09.919 "zone_append": false, 00:23:09.919 "compare": false, 00:23:09.919 "compare_and_write": false, 00:23:09.919 "abort": true, 00:23:09.919 "seek_hole": false, 00:23:09.919 "seek_data": false, 00:23:09.919 "copy": true, 00:23:09.919 "nvme_iov_md": false 00:23:09.919 }, 00:23:09.919 "memory_domains": [ 00:23:09.919 { 00:23:09.919 "dma_device_id": "system", 00:23:09.919 "dma_device_type": 1 00:23:09.919 }, 00:23:09.919 { 00:23:09.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.919 "dma_device_type": 2 00:23:09.919 } 00:23:09.919 ], 00:23:09.919 "driver_specific": {} 00:23:09.919 } 00:23:09.919 ] 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.919 "name": "Existed_Raid", 00:23:09.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.919 "strip_size_kb": 64, 00:23:09.919 "state": "configuring", 00:23:09.919 "raid_level": "raid5f", 00:23:09.919 "superblock": false, 00:23:09.919 "num_base_bdevs": 4, 00:23:09.919 "num_base_bdevs_discovered": 3, 00:23:09.919 "num_base_bdevs_operational": 4, 00:23:09.919 "base_bdevs_list": [ 00:23:09.919 { 00:23:09.919 "name": "BaseBdev1", 00:23:09.919 "uuid": "67d3050d-841b-4f12-9cba-d51b612587d2", 00:23:09.919 "is_configured": true, 00:23:09.919 "data_offset": 0, 00:23:09.919 "data_size": 65536 00:23:09.919 }, 00:23:09.919 { 00:23:09.919 "name": "BaseBdev2", 00:23:09.919 "uuid": "9284dcc3-5725-4fba-8937-1c84bfcd70ce", 00:23:09.919 "is_configured": true, 00:23:09.919 "data_offset": 0, 00:23:09.919 "data_size": 65536 00:23:09.919 }, 00:23:09.919 { 
00:23:09.919 "name": "BaseBdev3", 00:23:09.919 "uuid": "d864846d-d3c0-4332-b59e-60c205d70692", 00:23:09.919 "is_configured": true, 00:23:09.919 "data_offset": 0, 00:23:09.919 "data_size": 65536 00:23:09.919 }, 00:23:09.919 { 00:23:09.919 "name": "BaseBdev4", 00:23:09.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.919 "is_configured": false, 00:23:09.919 "data_offset": 0, 00:23:09.919 "data_size": 0 00:23:09.919 } 00:23:09.919 ] 00:23:09.919 }' 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.919 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.486 [2024-12-06 13:17:16.866978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:10.486 [2024-12-06 13:17:16.867080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:10.486 [2024-12-06 13:17:16.867096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:10.486 [2024-12-06 13:17:16.867440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:10.486 [2024-12-06 13:17:16.874383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:10.486 [2024-12-06 13:17:16.874417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:10.486 [2024-12-06 13:17:16.874807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.486 BaseBdev4 00:23:10.486 13:17:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.486 [ 00:23:10.486 { 00:23:10.486 "name": "BaseBdev4", 00:23:10.486 "aliases": [ 00:23:10.486 "44f64c17-6a60-4497-b526-335348e028b6" 00:23:10.486 ], 00:23:10.486 "product_name": "Malloc disk", 00:23:10.486 "block_size": 512, 00:23:10.486 "num_blocks": 65536, 00:23:10.486 "uuid": "44f64c17-6a60-4497-b526-335348e028b6", 00:23:10.486 "assigned_rate_limits": { 00:23:10.486 "rw_ios_per_sec": 0, 00:23:10.486 
"rw_mbytes_per_sec": 0, 00:23:10.486 "r_mbytes_per_sec": 0, 00:23:10.486 "w_mbytes_per_sec": 0 00:23:10.486 }, 00:23:10.486 "claimed": true, 00:23:10.486 "claim_type": "exclusive_write", 00:23:10.486 "zoned": false, 00:23:10.486 "supported_io_types": { 00:23:10.486 "read": true, 00:23:10.486 "write": true, 00:23:10.486 "unmap": true, 00:23:10.486 "flush": true, 00:23:10.486 "reset": true, 00:23:10.486 "nvme_admin": false, 00:23:10.486 "nvme_io": false, 00:23:10.486 "nvme_io_md": false, 00:23:10.486 "write_zeroes": true, 00:23:10.486 "zcopy": true, 00:23:10.486 "get_zone_info": false, 00:23:10.486 "zone_management": false, 00:23:10.486 "zone_append": false, 00:23:10.486 "compare": false, 00:23:10.486 "compare_and_write": false, 00:23:10.486 "abort": true, 00:23:10.486 "seek_hole": false, 00:23:10.486 "seek_data": false, 00:23:10.486 "copy": true, 00:23:10.486 "nvme_iov_md": false 00:23:10.486 }, 00:23:10.486 "memory_domains": [ 00:23:10.486 { 00:23:10.486 "dma_device_id": "system", 00:23:10.486 "dma_device_type": 1 00:23:10.486 }, 00:23:10.486 { 00:23:10.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.486 "dma_device_type": 2 00:23:10.486 } 00:23:10.486 ], 00:23:10.486 "driver_specific": {} 00:23:10.486 } 00:23:10.486 ] 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:10.486 13:17:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.486 "name": "Existed_Raid", 00:23:10.486 "uuid": "9f96466d-56b9-4f4c-b8e0-cba28ba696e3", 00:23:10.486 "strip_size_kb": 64, 00:23:10.486 "state": "online", 00:23:10.486 "raid_level": "raid5f", 00:23:10.486 "superblock": false, 00:23:10.486 "num_base_bdevs": 4, 00:23:10.486 "num_base_bdevs_discovered": 4, 00:23:10.486 "num_base_bdevs_operational": 4, 00:23:10.486 "base_bdevs_list": [ 00:23:10.486 { 00:23:10.486 "name": 
"BaseBdev1", 00:23:10.486 "uuid": "67d3050d-841b-4f12-9cba-d51b612587d2", 00:23:10.486 "is_configured": true, 00:23:10.486 "data_offset": 0, 00:23:10.486 "data_size": 65536 00:23:10.486 }, 00:23:10.486 { 00:23:10.486 "name": "BaseBdev2", 00:23:10.486 "uuid": "9284dcc3-5725-4fba-8937-1c84bfcd70ce", 00:23:10.486 "is_configured": true, 00:23:10.486 "data_offset": 0, 00:23:10.486 "data_size": 65536 00:23:10.486 }, 00:23:10.486 { 00:23:10.486 "name": "BaseBdev3", 00:23:10.486 "uuid": "d864846d-d3c0-4332-b59e-60c205d70692", 00:23:10.486 "is_configured": true, 00:23:10.486 "data_offset": 0, 00:23:10.486 "data_size": 65536 00:23:10.486 }, 00:23:10.486 { 00:23:10.486 "name": "BaseBdev4", 00:23:10.486 "uuid": "44f64c17-6a60-4497-b526-335348e028b6", 00:23:10.486 "is_configured": true, 00:23:10.486 "data_offset": 0, 00:23:10.486 "data_size": 65536 00:23:10.486 } 00:23:10.486 ] 00:23:10.486 }' 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.486 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:11.054 [2024-12-06 13:17:17.443372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:11.054 "name": "Existed_Raid", 00:23:11.054 "aliases": [ 00:23:11.054 "9f96466d-56b9-4f4c-b8e0-cba28ba696e3" 00:23:11.054 ], 00:23:11.054 "product_name": "Raid Volume", 00:23:11.054 "block_size": 512, 00:23:11.054 "num_blocks": 196608, 00:23:11.054 "uuid": "9f96466d-56b9-4f4c-b8e0-cba28ba696e3", 00:23:11.054 "assigned_rate_limits": { 00:23:11.054 "rw_ios_per_sec": 0, 00:23:11.054 "rw_mbytes_per_sec": 0, 00:23:11.054 "r_mbytes_per_sec": 0, 00:23:11.054 "w_mbytes_per_sec": 0 00:23:11.054 }, 00:23:11.054 "claimed": false, 00:23:11.054 "zoned": false, 00:23:11.054 "supported_io_types": { 00:23:11.054 "read": true, 00:23:11.054 "write": true, 00:23:11.054 "unmap": false, 00:23:11.054 "flush": false, 00:23:11.054 "reset": true, 00:23:11.054 "nvme_admin": false, 00:23:11.054 "nvme_io": false, 00:23:11.054 "nvme_io_md": false, 00:23:11.054 "write_zeroes": true, 00:23:11.054 "zcopy": false, 00:23:11.054 "get_zone_info": false, 00:23:11.054 "zone_management": false, 00:23:11.054 "zone_append": false, 00:23:11.054 "compare": false, 00:23:11.054 "compare_and_write": false, 00:23:11.054 "abort": false, 00:23:11.054 "seek_hole": false, 00:23:11.054 "seek_data": false, 00:23:11.054 "copy": false, 00:23:11.054 "nvme_iov_md": false 00:23:11.054 }, 00:23:11.054 "driver_specific": { 00:23:11.054 "raid": { 00:23:11.054 "uuid": "9f96466d-56b9-4f4c-b8e0-cba28ba696e3", 00:23:11.054 "strip_size_kb": 64, 
00:23:11.054 "state": "online", 00:23:11.054 "raid_level": "raid5f", 00:23:11.054 "superblock": false, 00:23:11.054 "num_base_bdevs": 4, 00:23:11.054 "num_base_bdevs_discovered": 4, 00:23:11.054 "num_base_bdevs_operational": 4, 00:23:11.054 "base_bdevs_list": [ 00:23:11.054 { 00:23:11.054 "name": "BaseBdev1", 00:23:11.054 "uuid": "67d3050d-841b-4f12-9cba-d51b612587d2", 00:23:11.054 "is_configured": true, 00:23:11.054 "data_offset": 0, 00:23:11.054 "data_size": 65536 00:23:11.054 }, 00:23:11.054 { 00:23:11.054 "name": "BaseBdev2", 00:23:11.054 "uuid": "9284dcc3-5725-4fba-8937-1c84bfcd70ce", 00:23:11.054 "is_configured": true, 00:23:11.054 "data_offset": 0, 00:23:11.054 "data_size": 65536 00:23:11.054 }, 00:23:11.054 { 00:23:11.054 "name": "BaseBdev3", 00:23:11.054 "uuid": "d864846d-d3c0-4332-b59e-60c205d70692", 00:23:11.054 "is_configured": true, 00:23:11.054 "data_offset": 0, 00:23:11.054 "data_size": 65536 00:23:11.054 }, 00:23:11.054 { 00:23:11.054 "name": "BaseBdev4", 00:23:11.054 "uuid": "44f64c17-6a60-4497-b526-335348e028b6", 00:23:11.054 "is_configured": true, 00:23:11.054 "data_offset": 0, 00:23:11.054 "data_size": 65536 00:23:11.054 } 00:23:11.054 ] 00:23:11.054 } 00:23:11.054 } 00:23:11.054 }' 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:11.054 BaseBdev2 00:23:11.054 BaseBdev3 00:23:11.054 BaseBdev4' 00:23:11.054 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:11.313 13:17:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:11.313 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.314 13:17:17 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:23:11.314 [2024-12-06 13:17:17.827215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.572 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.573 13:17:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.573 "name": "Existed_Raid", 00:23:11.573 "uuid": "9f96466d-56b9-4f4c-b8e0-cba28ba696e3", 00:23:11.573 "strip_size_kb": 64, 00:23:11.573 "state": "online", 00:23:11.573 "raid_level": "raid5f", 00:23:11.573 "superblock": false, 00:23:11.573 "num_base_bdevs": 4, 00:23:11.573 "num_base_bdevs_discovered": 3, 00:23:11.573 "num_base_bdevs_operational": 3, 00:23:11.573 "base_bdevs_list": [ 00:23:11.573 { 00:23:11.573 "name": null, 00:23:11.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.573 "is_configured": false, 00:23:11.573 "data_offset": 0, 00:23:11.573 "data_size": 65536 00:23:11.573 }, 00:23:11.573 { 00:23:11.573 "name": "BaseBdev2", 00:23:11.573 "uuid": "9284dcc3-5725-4fba-8937-1c84bfcd70ce", 00:23:11.573 "is_configured": true, 00:23:11.573 "data_offset": 0, 00:23:11.573 "data_size": 65536 00:23:11.573 }, 00:23:11.573 { 00:23:11.573 "name": "BaseBdev3", 00:23:11.573 "uuid": "d864846d-d3c0-4332-b59e-60c205d70692", 00:23:11.573 "is_configured": true, 00:23:11.573 "data_offset": 0, 00:23:11.573 "data_size": 65536 00:23:11.573 }, 00:23:11.573 { 00:23:11.573 "name": "BaseBdev4", 00:23:11.573 "uuid": "44f64c17-6a60-4497-b526-335348e028b6", 00:23:11.573 "is_configured": true, 00:23:11.573 "data_offset": 0, 00:23:11.573 "data_size": 65536 00:23:11.573 } 00:23:11.573 ] 00:23:11.573 }' 00:23:11.573 
13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.573 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.140 [2024-12-06 13:17:18.470407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:12.140 [2024-12-06 13:17:18.470575] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:12.140 [2024-12-06 13:17:18.561289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.140 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:12.141 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.141 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.141 [2024-12-06 13:17:18.625362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.400 [2024-12-06 13:17:18.778094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:12.400 [2024-12-06 13:17:18.778191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.400 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:12.400 13:17:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.660 BaseBdev2 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.660 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.660 [ 00:23:12.660 { 00:23:12.660 "name": "BaseBdev2", 00:23:12.660 "aliases": [ 00:23:12.660 "d6c9a7c7-6151-41b9-8d1f-04877508f977" 00:23:12.660 ], 00:23:12.660 "product_name": "Malloc disk", 00:23:12.660 "block_size": 512, 00:23:12.660 "num_blocks": 65536, 00:23:12.660 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:12.660 "assigned_rate_limits": { 00:23:12.660 "rw_ios_per_sec": 0, 00:23:12.660 "rw_mbytes_per_sec": 0, 00:23:12.660 "r_mbytes_per_sec": 0, 00:23:12.660 "w_mbytes_per_sec": 0 00:23:12.660 }, 00:23:12.660 "claimed": false, 00:23:12.660 "zoned": false, 00:23:12.660 "supported_io_types": { 00:23:12.660 "read": true, 00:23:12.660 "write": true, 00:23:12.660 "unmap": true, 00:23:12.660 "flush": true, 00:23:12.660 "reset": true, 00:23:12.660 "nvme_admin": false, 00:23:12.660 "nvme_io": false, 00:23:12.660 "nvme_io_md": false, 00:23:12.660 "write_zeroes": true, 00:23:12.660 "zcopy": true, 00:23:12.660 "get_zone_info": false, 00:23:12.660 "zone_management": false, 00:23:12.660 "zone_append": false, 00:23:12.660 "compare": false, 00:23:12.660 "compare_and_write": false, 00:23:12.660 "abort": true, 00:23:12.660 "seek_hole": false, 00:23:12.660 "seek_data": false, 00:23:12.660 "copy": true, 00:23:12.660 "nvme_iov_md": false 00:23:12.660 }, 00:23:12.660 "memory_domains": [ 00:23:12.660 { 00:23:12.660 "dma_device_id": "system", 00:23:12.660 "dma_device_type": 1 00:23:12.660 }, 
00:23:12.660 { 00:23:12.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.660 "dma_device_type": 2 00:23:12.660 } 00:23:12.660 ], 00:23:12.660 "driver_specific": {} 00:23:12.660 } 00:23:12.660 ] 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.660 BaseBdev3 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.660 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.660 [ 00:23:12.660 { 00:23:12.660 "name": "BaseBdev3", 00:23:12.660 "aliases": [ 00:23:12.660 "7da7fabc-570a-49c6-a2c2-54076324bda4" 00:23:12.660 ], 00:23:12.660 "product_name": "Malloc disk", 00:23:12.660 "block_size": 512, 00:23:12.660 "num_blocks": 65536, 00:23:12.660 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:12.660 "assigned_rate_limits": { 00:23:12.661 "rw_ios_per_sec": 0, 00:23:12.661 "rw_mbytes_per_sec": 0, 00:23:12.661 "r_mbytes_per_sec": 0, 00:23:12.661 "w_mbytes_per_sec": 0 00:23:12.661 }, 00:23:12.661 "claimed": false, 00:23:12.661 "zoned": false, 00:23:12.661 "supported_io_types": { 00:23:12.661 "read": true, 00:23:12.661 "write": true, 00:23:12.661 "unmap": true, 00:23:12.661 "flush": true, 00:23:12.661 "reset": true, 00:23:12.661 "nvme_admin": false, 00:23:12.661 "nvme_io": false, 00:23:12.661 "nvme_io_md": false, 00:23:12.661 "write_zeroes": true, 00:23:12.661 "zcopy": true, 00:23:12.661 "get_zone_info": false, 00:23:12.661 "zone_management": false, 00:23:12.661 "zone_append": false, 00:23:12.661 "compare": false, 00:23:12.661 "compare_and_write": false, 00:23:12.661 "abort": true, 00:23:12.661 "seek_hole": false, 00:23:12.661 "seek_data": false, 00:23:12.661 "copy": true, 00:23:12.661 "nvme_iov_md": false 00:23:12.661 }, 00:23:12.661 "memory_domains": [ 00:23:12.661 { 00:23:12.661 "dma_device_id": "system", 00:23:12.661 
"dma_device_type": 1 00:23:12.661 }, 00:23:12.661 { 00:23:12.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.661 "dma_device_type": 2 00:23:12.661 } 00:23:12.661 ], 00:23:12.661 "driver_specific": {} 00:23:12.661 } 00:23:12.661 ] 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.661 BaseBdev4 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:12.661 13:17:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.661 [ 00:23:12.661 { 00:23:12.661 "name": "BaseBdev4", 00:23:12.661 "aliases": [ 00:23:12.661 "75bbbd19-2f69-40b4-8619-6d33882ea735" 00:23:12.661 ], 00:23:12.661 "product_name": "Malloc disk", 00:23:12.661 "block_size": 512, 00:23:12.661 "num_blocks": 65536, 00:23:12.661 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:12.661 "assigned_rate_limits": { 00:23:12.661 "rw_ios_per_sec": 0, 00:23:12.661 "rw_mbytes_per_sec": 0, 00:23:12.661 "r_mbytes_per_sec": 0, 00:23:12.661 "w_mbytes_per_sec": 0 00:23:12.661 }, 00:23:12.661 "claimed": false, 00:23:12.661 "zoned": false, 00:23:12.661 "supported_io_types": { 00:23:12.661 "read": true, 00:23:12.661 "write": true, 00:23:12.661 "unmap": true, 00:23:12.661 "flush": true, 00:23:12.661 "reset": true, 00:23:12.661 "nvme_admin": false, 00:23:12.661 "nvme_io": false, 00:23:12.661 "nvme_io_md": false, 00:23:12.661 "write_zeroes": true, 00:23:12.661 "zcopy": true, 00:23:12.661 "get_zone_info": false, 00:23:12.661 "zone_management": false, 00:23:12.661 "zone_append": false, 00:23:12.661 "compare": false, 00:23:12.661 "compare_and_write": false, 00:23:12.661 "abort": true, 00:23:12.661 "seek_hole": false, 00:23:12.661 "seek_data": false, 00:23:12.661 "copy": true, 00:23:12.661 "nvme_iov_md": false 00:23:12.661 }, 00:23:12.661 "memory_domains": [ 00:23:12.661 { 00:23:12.661 
"dma_device_id": "system", 00:23:12.661 "dma_device_type": 1 00:23:12.661 }, 00:23:12.661 { 00:23:12.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.661 "dma_device_type": 2 00:23:12.661 } 00:23:12.661 ], 00:23:12.661 "driver_specific": {} 00:23:12.661 } 00:23:12.661 ] 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.661 [2024-12-06 13:17:19.164735] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:12.661 [2024-12-06 13:17:19.165263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:12.661 [2024-12-06 13:17:19.165414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:12.661 [2024-12-06 13:17:19.170835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:12.661 [2024-12-06 13:17:19.171241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.661 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.921 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.921 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:12.921 "name": "Existed_Raid", 00:23:12.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.921 "strip_size_kb": 64, 00:23:12.921 "state": "configuring", 00:23:12.921 "raid_level": "raid5f", 00:23:12.921 "superblock": false, 00:23:12.921 
"num_base_bdevs": 4, 00:23:12.921 "num_base_bdevs_discovered": 3, 00:23:12.921 "num_base_bdevs_operational": 4, 00:23:12.921 "base_bdevs_list": [ 00:23:12.921 { 00:23:12.921 "name": "BaseBdev1", 00:23:12.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.921 "is_configured": false, 00:23:12.921 "data_offset": 0, 00:23:12.921 "data_size": 0 00:23:12.921 }, 00:23:12.921 { 00:23:12.921 "name": "BaseBdev2", 00:23:12.921 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:12.921 "is_configured": true, 00:23:12.921 "data_offset": 0, 00:23:12.921 "data_size": 65536 00:23:12.921 }, 00:23:12.921 { 00:23:12.921 "name": "BaseBdev3", 00:23:12.921 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:12.921 "is_configured": true, 00:23:12.921 "data_offset": 0, 00:23:12.921 "data_size": 65536 00:23:12.921 }, 00:23:12.921 { 00:23:12.921 "name": "BaseBdev4", 00:23:12.921 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:12.921 "is_configured": true, 00:23:12.921 "data_offset": 0, 00:23:12.921 "data_size": 65536 00:23:12.921 } 00:23:12.921 ] 00:23:12.921 }' 00:23:12.921 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:12.921 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.181 [2024-12-06 13:17:19.691747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:13.181 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.440 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.440 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.440 "name": "Existed_Raid", 00:23:13.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.440 "strip_size_kb": 64, 00:23:13.440 "state": "configuring", 00:23:13.440 "raid_level": "raid5f", 00:23:13.440 "superblock": false, 00:23:13.440 "num_base_bdevs": 4, 
00:23:13.440 "num_base_bdevs_discovered": 2, 00:23:13.440 "num_base_bdevs_operational": 4, 00:23:13.440 "base_bdevs_list": [ 00:23:13.440 { 00:23:13.440 "name": "BaseBdev1", 00:23:13.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.440 "is_configured": false, 00:23:13.440 "data_offset": 0, 00:23:13.440 "data_size": 0 00:23:13.440 }, 00:23:13.440 { 00:23:13.440 "name": null, 00:23:13.440 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:13.440 "is_configured": false, 00:23:13.440 "data_offset": 0, 00:23:13.440 "data_size": 65536 00:23:13.440 }, 00:23:13.440 { 00:23:13.440 "name": "BaseBdev3", 00:23:13.440 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:13.440 "is_configured": true, 00:23:13.440 "data_offset": 0, 00:23:13.440 "data_size": 65536 00:23:13.440 }, 00:23:13.440 { 00:23:13.440 "name": "BaseBdev4", 00:23:13.440 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:13.440 "is_configured": true, 00:23:13.440 "data_offset": 0, 00:23:13.440 "data_size": 65536 00:23:13.440 } 00:23:13.440 ] 00:23:13.440 }' 00:23:13.440 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.440 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:14.007 13:17:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.007 [2024-12-06 13:17:20.334360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:14.007 BaseBdev1 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.007 13:17:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.007 [ 00:23:14.007 { 00:23:14.007 "name": "BaseBdev1", 00:23:14.007 "aliases": [ 00:23:14.007 "74cb452f-6355-445f-80ce-1449688bd9cb" 00:23:14.007 ], 00:23:14.007 "product_name": "Malloc disk", 00:23:14.007 "block_size": 512, 00:23:14.007 "num_blocks": 65536, 00:23:14.007 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:14.007 "assigned_rate_limits": { 00:23:14.007 "rw_ios_per_sec": 0, 00:23:14.007 "rw_mbytes_per_sec": 0, 00:23:14.007 "r_mbytes_per_sec": 0, 00:23:14.007 "w_mbytes_per_sec": 0 00:23:14.007 }, 00:23:14.007 "claimed": true, 00:23:14.007 "claim_type": "exclusive_write", 00:23:14.007 "zoned": false, 00:23:14.007 "supported_io_types": { 00:23:14.007 "read": true, 00:23:14.007 "write": true, 00:23:14.007 "unmap": true, 00:23:14.007 "flush": true, 00:23:14.007 "reset": true, 00:23:14.007 "nvme_admin": false, 00:23:14.007 "nvme_io": false, 00:23:14.007 "nvme_io_md": false, 00:23:14.007 "write_zeroes": true, 00:23:14.007 "zcopy": true, 00:23:14.007 "get_zone_info": false, 00:23:14.007 "zone_management": false, 00:23:14.007 "zone_append": false, 00:23:14.007 "compare": false, 00:23:14.007 "compare_and_write": false, 00:23:14.007 "abort": true, 00:23:14.007 "seek_hole": false, 00:23:14.007 "seek_data": false, 00:23:14.007 "copy": true, 00:23:14.007 "nvme_iov_md": false 00:23:14.007 }, 00:23:14.007 "memory_domains": [ 00:23:14.007 { 00:23:14.007 "dma_device_id": "system", 00:23:14.007 "dma_device_type": 1 00:23:14.007 }, 00:23:14.007 { 00:23:14.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.007 "dma_device_type": 2 00:23:14.007 } 00:23:14.007 ], 00:23:14.007 "driver_specific": {} 00:23:14.007 } 00:23:14.007 ] 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:14.007 13:17:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:14.007 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.008 "name": "Existed_Raid", 00:23:14.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.008 "strip_size_kb": 64, 00:23:14.008 "state": 
"configuring", 00:23:14.008 "raid_level": "raid5f", 00:23:14.008 "superblock": false, 00:23:14.008 "num_base_bdevs": 4, 00:23:14.008 "num_base_bdevs_discovered": 3, 00:23:14.008 "num_base_bdevs_operational": 4, 00:23:14.008 "base_bdevs_list": [ 00:23:14.008 { 00:23:14.008 "name": "BaseBdev1", 00:23:14.008 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:14.008 "is_configured": true, 00:23:14.008 "data_offset": 0, 00:23:14.008 "data_size": 65536 00:23:14.008 }, 00:23:14.008 { 00:23:14.008 "name": null, 00:23:14.008 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:14.008 "is_configured": false, 00:23:14.008 "data_offset": 0, 00:23:14.008 "data_size": 65536 00:23:14.008 }, 00:23:14.008 { 00:23:14.008 "name": "BaseBdev3", 00:23:14.008 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:14.008 "is_configured": true, 00:23:14.008 "data_offset": 0, 00:23:14.008 "data_size": 65536 00:23:14.008 }, 00:23:14.008 { 00:23:14.008 "name": "BaseBdev4", 00:23:14.008 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:14.008 "is_configured": true, 00:23:14.008 "data_offset": 0, 00:23:14.008 "data_size": 65536 00:23:14.008 } 00:23:14.008 ] 00:23:14.008 }' 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.008 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.577 13:17:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.577 [2024-12-06 13:17:20.954578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.577 13:17:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.577 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.577 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.577 "name": "Existed_Raid", 00:23:14.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.577 "strip_size_kb": 64, 00:23:14.577 "state": "configuring", 00:23:14.577 "raid_level": "raid5f", 00:23:14.577 "superblock": false, 00:23:14.577 "num_base_bdevs": 4, 00:23:14.577 "num_base_bdevs_discovered": 2, 00:23:14.577 "num_base_bdevs_operational": 4, 00:23:14.577 "base_bdevs_list": [ 00:23:14.577 { 00:23:14.577 "name": "BaseBdev1", 00:23:14.577 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:14.577 "is_configured": true, 00:23:14.577 "data_offset": 0, 00:23:14.577 "data_size": 65536 00:23:14.577 }, 00:23:14.577 { 00:23:14.577 "name": null, 00:23:14.577 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:14.577 "is_configured": false, 00:23:14.577 "data_offset": 0, 00:23:14.577 "data_size": 65536 00:23:14.577 }, 00:23:14.577 { 00:23:14.577 "name": null, 00:23:14.577 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:14.577 "is_configured": false, 00:23:14.577 "data_offset": 0, 00:23:14.577 "data_size": 65536 00:23:14.577 }, 00:23:14.577 { 00:23:14.577 "name": "BaseBdev4", 00:23:14.577 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:14.577 "is_configured": true, 00:23:14.577 "data_offset": 0, 00:23:14.577 "data_size": 65536 00:23:14.577 } 00:23:14.577 ] 00:23:14.577 }' 00:23:14.577 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.577 13:17:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.144 [2024-12-06 13:17:21.538765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:15.144 
13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.144 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.144 "name": "Existed_Raid", 00:23:15.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.145 "strip_size_kb": 64, 00:23:15.145 "state": "configuring", 00:23:15.145 "raid_level": "raid5f", 00:23:15.145 "superblock": false, 00:23:15.145 "num_base_bdevs": 4, 00:23:15.145 "num_base_bdevs_discovered": 3, 00:23:15.145 "num_base_bdevs_operational": 4, 00:23:15.145 "base_bdevs_list": [ 00:23:15.145 { 00:23:15.145 "name": "BaseBdev1", 00:23:15.145 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:15.145 "is_configured": true, 00:23:15.145 "data_offset": 0, 00:23:15.145 "data_size": 65536 00:23:15.145 }, 00:23:15.145 { 00:23:15.145 "name": null, 00:23:15.145 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:15.145 "is_configured": 
false, 00:23:15.145 "data_offset": 0, 00:23:15.145 "data_size": 65536 00:23:15.145 }, 00:23:15.145 { 00:23:15.145 "name": "BaseBdev3", 00:23:15.145 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:15.145 "is_configured": true, 00:23:15.145 "data_offset": 0, 00:23:15.145 "data_size": 65536 00:23:15.145 }, 00:23:15.145 { 00:23:15.145 "name": "BaseBdev4", 00:23:15.145 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:15.145 "is_configured": true, 00:23:15.145 "data_offset": 0, 00:23:15.145 "data_size": 65536 00:23:15.145 } 00:23:15.145 ] 00:23:15.145 }' 00:23:15.145 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.145 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.715 [2024-12-06 13:17:22.143019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.715 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.974 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.974 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.974 "name": "Existed_Raid", 00:23:15.974 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:15.974 "strip_size_kb": 64, 00:23:15.974 "state": "configuring", 00:23:15.974 "raid_level": "raid5f", 00:23:15.974 "superblock": false, 00:23:15.974 "num_base_bdevs": 4, 00:23:15.974 "num_base_bdevs_discovered": 2, 00:23:15.974 "num_base_bdevs_operational": 4, 00:23:15.974 "base_bdevs_list": [ 00:23:15.974 { 00:23:15.974 "name": null, 00:23:15.974 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:15.974 "is_configured": false, 00:23:15.974 "data_offset": 0, 00:23:15.974 "data_size": 65536 00:23:15.974 }, 00:23:15.974 { 00:23:15.974 "name": null, 00:23:15.974 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:15.974 "is_configured": false, 00:23:15.974 "data_offset": 0, 00:23:15.974 "data_size": 65536 00:23:15.974 }, 00:23:15.974 { 00:23:15.974 "name": "BaseBdev3", 00:23:15.974 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:15.974 "is_configured": true, 00:23:15.974 "data_offset": 0, 00:23:15.974 "data_size": 65536 00:23:15.974 }, 00:23:15.974 { 00:23:15.974 "name": "BaseBdev4", 00:23:15.974 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:15.974 "is_configured": true, 00:23:15.974 "data_offset": 0, 00:23:15.974 "data_size": 65536 00:23:15.974 } 00:23:15.974 ] 00:23:15.974 }' 00:23:15.974 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.974 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.233 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.233 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.233 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.233 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.516 [2024-12-06 13:17:22.803298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.516 "name": "Existed_Raid", 00:23:16.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.516 "strip_size_kb": 64, 00:23:16.516 "state": "configuring", 00:23:16.516 "raid_level": "raid5f", 00:23:16.516 "superblock": false, 00:23:16.516 "num_base_bdevs": 4, 00:23:16.516 "num_base_bdevs_discovered": 3, 00:23:16.516 "num_base_bdevs_operational": 4, 00:23:16.516 "base_bdevs_list": [ 00:23:16.516 { 00:23:16.516 "name": null, 00:23:16.516 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:16.516 "is_configured": false, 00:23:16.516 "data_offset": 0, 00:23:16.516 "data_size": 65536 00:23:16.516 }, 00:23:16.516 { 00:23:16.516 "name": "BaseBdev2", 00:23:16.516 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:16.516 "is_configured": true, 00:23:16.516 "data_offset": 0, 00:23:16.516 "data_size": 65536 00:23:16.516 }, 00:23:16.516 { 00:23:16.516 "name": "BaseBdev3", 00:23:16.516 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:16.516 "is_configured": true, 00:23:16.516 "data_offset": 0, 00:23:16.516 "data_size": 65536 00:23:16.516 }, 00:23:16.516 { 00:23:16.516 "name": "BaseBdev4", 00:23:16.516 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:16.516 "is_configured": true, 00:23:16.516 "data_offset": 0, 00:23:16.516 "data_size": 65536 00:23:16.516 } 00:23:16.516 ] 00:23:16.516 }' 00:23:16.516 13:17:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.516 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 74cb452f-6355-445f-80ce-1449688bd9cb 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 [2024-12-06 13:17:23.446474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:17.084 [2024-12-06 
13:17:23.446779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:17.084 [2024-12-06 13:17:23.446803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:17.084 [2024-12-06 13:17:23.447192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:17.084 [2024-12-06 13:17:23.453803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:17.084 [2024-12-06 13:17:23.453973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:17.084 [2024-12-06 13:17:23.454343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.084 NewBaseBdev 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 [ 00:23:17.084 { 00:23:17.084 "name": "NewBaseBdev", 00:23:17.084 "aliases": [ 00:23:17.084 "74cb452f-6355-445f-80ce-1449688bd9cb" 00:23:17.084 ], 00:23:17.084 "product_name": "Malloc disk", 00:23:17.084 "block_size": 512, 00:23:17.084 "num_blocks": 65536, 00:23:17.084 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:17.084 "assigned_rate_limits": { 00:23:17.084 "rw_ios_per_sec": 0, 00:23:17.084 "rw_mbytes_per_sec": 0, 00:23:17.084 "r_mbytes_per_sec": 0, 00:23:17.084 "w_mbytes_per_sec": 0 00:23:17.084 }, 00:23:17.084 "claimed": true, 00:23:17.084 "claim_type": "exclusive_write", 00:23:17.084 "zoned": false, 00:23:17.084 "supported_io_types": { 00:23:17.084 "read": true, 00:23:17.084 "write": true, 00:23:17.084 "unmap": true, 00:23:17.084 "flush": true, 00:23:17.084 "reset": true, 00:23:17.084 "nvme_admin": false, 00:23:17.084 "nvme_io": false, 00:23:17.084 "nvme_io_md": false, 00:23:17.084 "write_zeroes": true, 00:23:17.084 "zcopy": true, 00:23:17.084 "get_zone_info": false, 00:23:17.084 "zone_management": false, 00:23:17.084 "zone_append": false, 00:23:17.084 "compare": false, 00:23:17.084 "compare_and_write": false, 00:23:17.084 "abort": true, 00:23:17.084 "seek_hole": false, 00:23:17.084 "seek_data": false, 00:23:17.084 "copy": true, 00:23:17.084 "nvme_iov_md": false 00:23:17.084 }, 00:23:17.084 "memory_domains": [ 00:23:17.084 { 00:23:17.084 "dma_device_id": "system", 00:23:17.084 "dma_device_type": 1 00:23:17.084 }, 00:23:17.084 { 00:23:17.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.084 "dma_device_type": 2 00:23:17.084 } 
00:23:17.084 ], 00:23:17.084 "driver_specific": {} 00:23:17.084 } 00:23:17.084 ] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.084 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.084 "name": "Existed_Raid", 00:23:17.084 "uuid": "330e4a19-d74e-486f-b57e-7a487423a625", 00:23:17.084 "strip_size_kb": 64, 00:23:17.084 "state": "online", 00:23:17.084 "raid_level": "raid5f", 00:23:17.084 "superblock": false, 00:23:17.084 "num_base_bdevs": 4, 00:23:17.084 "num_base_bdevs_discovered": 4, 00:23:17.084 "num_base_bdevs_operational": 4, 00:23:17.084 "base_bdevs_list": [ 00:23:17.084 { 00:23:17.084 "name": "NewBaseBdev", 00:23:17.084 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:17.084 "is_configured": true, 00:23:17.084 "data_offset": 0, 00:23:17.084 "data_size": 65536 00:23:17.084 }, 00:23:17.084 { 00:23:17.084 "name": "BaseBdev2", 00:23:17.084 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:17.084 "is_configured": true, 00:23:17.084 "data_offset": 0, 00:23:17.084 "data_size": 65536 00:23:17.084 }, 00:23:17.084 { 00:23:17.084 "name": "BaseBdev3", 00:23:17.084 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:17.084 "is_configured": true, 00:23:17.084 "data_offset": 0, 00:23:17.084 "data_size": 65536 00:23:17.084 }, 00:23:17.084 { 00:23:17.084 "name": "BaseBdev4", 00:23:17.084 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:17.085 "is_configured": true, 00:23:17.085 "data_offset": 0, 00:23:17.085 "data_size": 65536 00:23:17.085 } 00:23:17.085 ] 00:23:17.085 }' 00:23:17.085 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.085 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:17.653 [2024-12-06 13:17:24.038269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:17.653 "name": "Existed_Raid", 00:23:17.653 "aliases": [ 00:23:17.653 "330e4a19-d74e-486f-b57e-7a487423a625" 00:23:17.653 ], 00:23:17.653 "product_name": "Raid Volume", 00:23:17.653 "block_size": 512, 00:23:17.653 "num_blocks": 196608, 00:23:17.653 "uuid": "330e4a19-d74e-486f-b57e-7a487423a625", 00:23:17.653 "assigned_rate_limits": { 00:23:17.653 "rw_ios_per_sec": 0, 00:23:17.653 "rw_mbytes_per_sec": 0, 00:23:17.653 "r_mbytes_per_sec": 0, 00:23:17.653 "w_mbytes_per_sec": 0 00:23:17.653 }, 00:23:17.653 "claimed": false, 00:23:17.653 "zoned": false, 00:23:17.653 "supported_io_types": { 00:23:17.653 "read": true, 00:23:17.653 "write": true, 00:23:17.653 "unmap": false, 00:23:17.653 "flush": false, 00:23:17.653 "reset": true, 00:23:17.653 "nvme_admin": false, 00:23:17.653 "nvme_io": false, 00:23:17.653 "nvme_io_md": 
false, 00:23:17.653 "write_zeroes": true, 00:23:17.653 "zcopy": false, 00:23:17.653 "get_zone_info": false, 00:23:17.653 "zone_management": false, 00:23:17.653 "zone_append": false, 00:23:17.653 "compare": false, 00:23:17.653 "compare_and_write": false, 00:23:17.653 "abort": false, 00:23:17.653 "seek_hole": false, 00:23:17.653 "seek_data": false, 00:23:17.653 "copy": false, 00:23:17.653 "nvme_iov_md": false 00:23:17.653 }, 00:23:17.653 "driver_specific": { 00:23:17.653 "raid": { 00:23:17.653 "uuid": "330e4a19-d74e-486f-b57e-7a487423a625", 00:23:17.653 "strip_size_kb": 64, 00:23:17.653 "state": "online", 00:23:17.653 "raid_level": "raid5f", 00:23:17.653 "superblock": false, 00:23:17.653 "num_base_bdevs": 4, 00:23:17.653 "num_base_bdevs_discovered": 4, 00:23:17.653 "num_base_bdevs_operational": 4, 00:23:17.653 "base_bdevs_list": [ 00:23:17.653 { 00:23:17.653 "name": "NewBaseBdev", 00:23:17.653 "uuid": "74cb452f-6355-445f-80ce-1449688bd9cb", 00:23:17.653 "is_configured": true, 00:23:17.653 "data_offset": 0, 00:23:17.653 "data_size": 65536 00:23:17.653 }, 00:23:17.653 { 00:23:17.653 "name": "BaseBdev2", 00:23:17.653 "uuid": "d6c9a7c7-6151-41b9-8d1f-04877508f977", 00:23:17.653 "is_configured": true, 00:23:17.653 "data_offset": 0, 00:23:17.653 "data_size": 65536 00:23:17.653 }, 00:23:17.653 { 00:23:17.653 "name": "BaseBdev3", 00:23:17.653 "uuid": "7da7fabc-570a-49c6-a2c2-54076324bda4", 00:23:17.653 "is_configured": true, 00:23:17.653 "data_offset": 0, 00:23:17.653 "data_size": 65536 00:23:17.653 }, 00:23:17.653 { 00:23:17.653 "name": "BaseBdev4", 00:23:17.653 "uuid": "75bbbd19-2f69-40b4-8619-6d33882ea735", 00:23:17.653 "is_configured": true, 00:23:17.653 "data_offset": 0, 00:23:17.653 "data_size": 65536 00:23:17.653 } 00:23:17.653 ] 00:23:17.653 } 00:23:17.653 } 00:23:17.653 }' 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:17.653 13:17:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:17.653 BaseBdev2 00:23:17.653 BaseBdev3 00:23:17.653 BaseBdev4' 00:23:17.653 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:17.912 13:17:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.912 [2024-12-06 13:17:24.418006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.912 [2024-12-06 13:17:24.418045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.912 [2024-12-06 13:17:24.418146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.912 [2024-12-06 13:17:24.418592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.912 [2024-12-06 13:17:24.418628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83501 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83501 ']' 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83501 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:23:17.912 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83501 00:23:18.171 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:18.171 killing process with pid 83501 00:23:18.171 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:18.171 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83501' 00:23:18.171 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83501 00:23:18.171 [2024-12-06 13:17:24.461117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:18.171 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83501 00:23:18.430 [2024-12-06 13:17:24.825908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:19.818 00:23:19.818 real 0m12.989s 00:23:19.818 user 0m21.326s 00:23:19.818 sys 0m1.961s 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.818 ************************************ 00:23:19.818 END TEST raid5f_state_function_test 00:23:19.818 ************************************ 00:23:19.818 13:17:25 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:19.818 13:17:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:19.818 13:17:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.818 13:17:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:19.818 ************************************ 00:23:19.818 START TEST 
raid5f_state_function_test_sb 00:23:19.818 ************************************ 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:23:19.818 
13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84183 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:19.818 Process raid pid: 84183 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84183' 00:23:19.818 13:17:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84183 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84183 ']' 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.818 13:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.819 13:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.819 13:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.819 [2024-12-06 13:17:26.091751] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:23:19.819 [2024-12-06 13:17:26.092039] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.819 [2024-12-06 13:17:26.280944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.077 [2024-12-06 13:17:26.414852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.335 [2024-12-06 13:17:26.629817] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.335 [2024-12-06 13:17:26.629870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.595 [2024-12-06 13:17:27.084070] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:20.595 [2024-12-06 13:17:27.084145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:20.595 [2024-12-06 13:17:27.084162] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:20.595 [2024-12-06 13:17:27.084178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:20.595 [2024-12-06 13:17:27.084188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:23:20.595 [2024-12-06 13:17:27.084202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:20.595 [2024-12-06 13:17:27.084211] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:20.595 [2024-12-06 13:17:27.084225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.595 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.854 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.854 "name": "Existed_Raid", 00:23:20.854 "uuid": "210f84cc-03b6-478f-b71a-d36028b4e940", 00:23:20.854 "strip_size_kb": 64, 00:23:20.854 "state": "configuring", 00:23:20.854 "raid_level": "raid5f", 00:23:20.854 "superblock": true, 00:23:20.854 "num_base_bdevs": 4, 00:23:20.854 "num_base_bdevs_discovered": 0, 00:23:20.854 "num_base_bdevs_operational": 4, 00:23:20.854 "base_bdevs_list": [ 00:23:20.854 { 00:23:20.854 "name": "BaseBdev1", 00:23:20.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.854 "is_configured": false, 00:23:20.854 "data_offset": 0, 00:23:20.854 "data_size": 0 00:23:20.854 }, 00:23:20.854 { 00:23:20.854 "name": "BaseBdev2", 00:23:20.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.854 "is_configured": false, 00:23:20.854 "data_offset": 0, 00:23:20.854 "data_size": 0 00:23:20.854 }, 00:23:20.854 { 00:23:20.854 "name": "BaseBdev3", 00:23:20.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.854 "is_configured": false, 00:23:20.854 "data_offset": 0, 00:23:20.854 "data_size": 0 00:23:20.854 }, 00:23:20.854 { 00:23:20.854 "name": "BaseBdev4", 00:23:20.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.854 "is_configured": false, 00:23:20.854 "data_offset": 0, 00:23:20.854 "data_size": 0 00:23:20.854 } 00:23:20.854 ] 00:23:20.854 }' 00:23:20.854 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.854 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.112 [2024-12-06 13:17:27.616094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:21.112 [2024-12-06 13:17:27.616282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.112 [2024-12-06 13:17:27.624096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:21.112 [2024-12-06 13:17:27.624272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:21.112 [2024-12-06 13:17:27.624396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:21.112 [2024-12-06 13:17:27.624568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:21.112 [2024-12-06 13:17:27.624696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:21.112 [2024-12-06 13:17:27.624767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:21.112 [2024-12-06 13:17:27.624874] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:21.112 [2024-12-06 13:17:27.624933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.112 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.369 [2024-12-06 13:17:27.669091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.369 BaseBdev1 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.369 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.370 [ 00:23:21.370 { 00:23:21.370 "name": "BaseBdev1", 00:23:21.370 "aliases": [ 00:23:21.370 "65297ac5-f588-45e3-8c33-297a1abf0e6f" 00:23:21.370 ], 00:23:21.370 "product_name": "Malloc disk", 00:23:21.370 "block_size": 512, 00:23:21.370 "num_blocks": 65536, 00:23:21.370 "uuid": "65297ac5-f588-45e3-8c33-297a1abf0e6f", 00:23:21.370 "assigned_rate_limits": { 00:23:21.370 "rw_ios_per_sec": 0, 00:23:21.370 "rw_mbytes_per_sec": 0, 00:23:21.370 "r_mbytes_per_sec": 0, 00:23:21.370 "w_mbytes_per_sec": 0 00:23:21.370 }, 00:23:21.370 "claimed": true, 00:23:21.370 "claim_type": "exclusive_write", 00:23:21.370 "zoned": false, 00:23:21.370 "supported_io_types": { 00:23:21.370 "read": true, 00:23:21.370 "write": true, 00:23:21.370 "unmap": true, 00:23:21.370 "flush": true, 00:23:21.370 "reset": true, 00:23:21.370 "nvme_admin": false, 00:23:21.370 "nvme_io": false, 00:23:21.370 "nvme_io_md": false, 00:23:21.370 "write_zeroes": true, 00:23:21.370 "zcopy": true, 00:23:21.370 "get_zone_info": false, 00:23:21.370 "zone_management": false, 00:23:21.370 "zone_append": false, 00:23:21.370 "compare": false, 00:23:21.370 "compare_and_write": false, 00:23:21.370 "abort": true, 00:23:21.370 "seek_hole": false, 00:23:21.370 "seek_data": false, 00:23:21.370 "copy": true, 00:23:21.370 "nvme_iov_md": false 00:23:21.370 }, 00:23:21.370 "memory_domains": [ 00:23:21.370 { 00:23:21.370 "dma_device_id": "system", 00:23:21.370 "dma_device_type": 1 00:23:21.370 }, 00:23:21.370 { 00:23:21.370 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:21.370 "dma_device_type": 2 00:23:21.370 } 00:23:21.370 ], 00:23:21.370 "driver_specific": {} 00:23:21.370 } 00:23:21.370 ] 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.370 "name": "Existed_Raid", 00:23:21.370 "uuid": "ae753279-cda0-4fc3-9337-a2f81e2b7ef4", 00:23:21.370 "strip_size_kb": 64, 00:23:21.370 "state": "configuring", 00:23:21.370 "raid_level": "raid5f", 00:23:21.370 "superblock": true, 00:23:21.370 "num_base_bdevs": 4, 00:23:21.370 "num_base_bdevs_discovered": 1, 00:23:21.370 "num_base_bdevs_operational": 4, 00:23:21.370 "base_bdevs_list": [ 00:23:21.370 { 00:23:21.370 "name": "BaseBdev1", 00:23:21.370 "uuid": "65297ac5-f588-45e3-8c33-297a1abf0e6f", 00:23:21.370 "is_configured": true, 00:23:21.370 "data_offset": 2048, 00:23:21.370 "data_size": 63488 00:23:21.370 }, 00:23:21.370 { 00:23:21.370 "name": "BaseBdev2", 00:23:21.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.370 "is_configured": false, 00:23:21.370 "data_offset": 0, 00:23:21.370 "data_size": 0 00:23:21.370 }, 00:23:21.370 { 00:23:21.370 "name": "BaseBdev3", 00:23:21.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.370 "is_configured": false, 00:23:21.370 "data_offset": 0, 00:23:21.370 "data_size": 0 00:23:21.370 }, 00:23:21.370 { 00:23:21.370 "name": "BaseBdev4", 00:23:21.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.370 "is_configured": false, 00:23:21.370 "data_offset": 0, 00:23:21.370 "data_size": 0 00:23:21.370 } 00:23:21.370 ] 00:23:21.370 }' 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.370 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:22.023 13:17:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.023 [2024-12-06 13:17:28.213301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:22.023 [2024-12-06 13:17:28.213366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.023 [2024-12-06 13:17:28.221362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.023 [2024-12-06 13:17:28.223860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.023 [2024-12-06 13:17:28.223926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:22.023 [2024-12-06 13:17:28.223943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:22.023 [2024-12-06 13:17:28.223961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:22.023 [2024-12-06 13:17:28.223972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:22.023 [2024-12-06 13:17:28.223985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:22.023 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.024 13:17:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.024 "name": "Existed_Raid", 00:23:22.024 "uuid": "033cb8a0-23e0-4d84-afba-e097ee7c12cd", 00:23:22.024 "strip_size_kb": 64, 00:23:22.024 "state": "configuring", 00:23:22.024 "raid_level": "raid5f", 00:23:22.024 "superblock": true, 00:23:22.024 "num_base_bdevs": 4, 00:23:22.024 "num_base_bdevs_discovered": 1, 00:23:22.024 "num_base_bdevs_operational": 4, 00:23:22.024 "base_bdevs_list": [ 00:23:22.024 { 00:23:22.024 "name": "BaseBdev1", 00:23:22.024 "uuid": "65297ac5-f588-45e3-8c33-297a1abf0e6f", 00:23:22.024 "is_configured": true, 00:23:22.024 "data_offset": 2048, 00:23:22.024 "data_size": 63488 00:23:22.024 }, 00:23:22.024 { 00:23:22.024 "name": "BaseBdev2", 00:23:22.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.024 "is_configured": false, 00:23:22.024 "data_offset": 0, 00:23:22.024 "data_size": 0 00:23:22.024 }, 00:23:22.024 { 00:23:22.024 "name": "BaseBdev3", 00:23:22.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.024 "is_configured": false, 00:23:22.024 "data_offset": 0, 00:23:22.024 "data_size": 0 00:23:22.024 }, 00:23:22.024 { 00:23:22.024 "name": "BaseBdev4", 00:23:22.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.024 "is_configured": false, 00:23:22.024 "data_offset": 0, 00:23:22.024 "data_size": 0 00:23:22.024 } 00:23:22.024 ] 00:23:22.024 }' 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.024 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.282 [2024-12-06 13:17:28.768250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.282 BaseBdev2 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.282 [ 00:23:22.282 { 00:23:22.282 "name": "BaseBdev2", 00:23:22.282 "aliases": [ 00:23:22.282 
"6c33d90f-6a4f-4510-bf33-a846e114c7ae" 00:23:22.282 ], 00:23:22.282 "product_name": "Malloc disk", 00:23:22.282 "block_size": 512, 00:23:22.282 "num_blocks": 65536, 00:23:22.282 "uuid": "6c33d90f-6a4f-4510-bf33-a846e114c7ae", 00:23:22.282 "assigned_rate_limits": { 00:23:22.282 "rw_ios_per_sec": 0, 00:23:22.282 "rw_mbytes_per_sec": 0, 00:23:22.282 "r_mbytes_per_sec": 0, 00:23:22.282 "w_mbytes_per_sec": 0 00:23:22.282 }, 00:23:22.282 "claimed": true, 00:23:22.282 "claim_type": "exclusive_write", 00:23:22.282 "zoned": false, 00:23:22.282 "supported_io_types": { 00:23:22.282 "read": true, 00:23:22.282 "write": true, 00:23:22.282 "unmap": true, 00:23:22.282 "flush": true, 00:23:22.282 "reset": true, 00:23:22.282 "nvme_admin": false, 00:23:22.282 "nvme_io": false, 00:23:22.282 "nvme_io_md": false, 00:23:22.282 "write_zeroes": true, 00:23:22.282 "zcopy": true, 00:23:22.282 "get_zone_info": false, 00:23:22.282 "zone_management": false, 00:23:22.282 "zone_append": false, 00:23:22.282 "compare": false, 00:23:22.282 "compare_and_write": false, 00:23:22.282 "abort": true, 00:23:22.282 "seek_hole": false, 00:23:22.282 "seek_data": false, 00:23:22.282 "copy": true, 00:23:22.282 "nvme_iov_md": false 00:23:22.282 }, 00:23:22.282 "memory_domains": [ 00:23:22.282 { 00:23:22.282 "dma_device_id": "system", 00:23:22.282 "dma_device_type": 1 00:23:22.282 }, 00:23:22.282 { 00:23:22.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.282 "dma_device_type": 2 00:23:22.282 } 00:23:22.282 ], 00:23:22.282 "driver_specific": {} 00:23:22.282 } 00:23:22.282 ] 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.282 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.541 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.541 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.541 "name": "Existed_Raid", 00:23:22.541 "uuid": 
"033cb8a0-23e0-4d84-afba-e097ee7c12cd", 00:23:22.541 "strip_size_kb": 64, 00:23:22.541 "state": "configuring", 00:23:22.541 "raid_level": "raid5f", 00:23:22.541 "superblock": true, 00:23:22.541 "num_base_bdevs": 4, 00:23:22.541 "num_base_bdevs_discovered": 2, 00:23:22.541 "num_base_bdevs_operational": 4, 00:23:22.541 "base_bdevs_list": [ 00:23:22.541 { 00:23:22.541 "name": "BaseBdev1", 00:23:22.541 "uuid": "65297ac5-f588-45e3-8c33-297a1abf0e6f", 00:23:22.541 "is_configured": true, 00:23:22.541 "data_offset": 2048, 00:23:22.541 "data_size": 63488 00:23:22.541 }, 00:23:22.541 { 00:23:22.541 "name": "BaseBdev2", 00:23:22.541 "uuid": "6c33d90f-6a4f-4510-bf33-a846e114c7ae", 00:23:22.541 "is_configured": true, 00:23:22.541 "data_offset": 2048, 00:23:22.541 "data_size": 63488 00:23:22.541 }, 00:23:22.541 { 00:23:22.541 "name": "BaseBdev3", 00:23:22.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.541 "is_configured": false, 00:23:22.541 "data_offset": 0, 00:23:22.541 "data_size": 0 00:23:22.542 }, 00:23:22.542 { 00:23:22.542 "name": "BaseBdev4", 00:23:22.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.542 "is_configured": false, 00:23:22.542 "data_offset": 0, 00:23:22.542 "data_size": 0 00:23:22.542 } 00:23:22.542 ] 00:23:22.542 }' 00:23:22.542 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.542 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.800 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:22.800 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.800 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.058 [2024-12-06 13:17:29.334934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:23.058 BaseBdev3 
00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.058 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.058 [ 00:23:23.058 { 00:23:23.058 "name": "BaseBdev3", 00:23:23.058 "aliases": [ 00:23:23.058 "b546c5e4-cb48-453f-996b-86d9d3ac22a7" 00:23:23.058 ], 00:23:23.058 "product_name": "Malloc disk", 00:23:23.058 "block_size": 512, 00:23:23.058 "num_blocks": 65536, 00:23:23.058 "uuid": "b546c5e4-cb48-453f-996b-86d9d3ac22a7", 00:23:23.058 
"assigned_rate_limits": { 00:23:23.058 "rw_ios_per_sec": 0, 00:23:23.058 "rw_mbytes_per_sec": 0, 00:23:23.058 "r_mbytes_per_sec": 0, 00:23:23.058 "w_mbytes_per_sec": 0 00:23:23.058 }, 00:23:23.058 "claimed": true, 00:23:23.058 "claim_type": "exclusive_write", 00:23:23.058 "zoned": false, 00:23:23.058 "supported_io_types": { 00:23:23.058 "read": true, 00:23:23.058 "write": true, 00:23:23.058 "unmap": true, 00:23:23.058 "flush": true, 00:23:23.058 "reset": true, 00:23:23.058 "nvme_admin": false, 00:23:23.058 "nvme_io": false, 00:23:23.058 "nvme_io_md": false, 00:23:23.058 "write_zeroes": true, 00:23:23.058 "zcopy": true, 00:23:23.058 "get_zone_info": false, 00:23:23.058 "zone_management": false, 00:23:23.058 "zone_append": false, 00:23:23.058 "compare": false, 00:23:23.058 "compare_and_write": false, 00:23:23.058 "abort": true, 00:23:23.058 "seek_hole": false, 00:23:23.058 "seek_data": false, 00:23:23.058 "copy": true, 00:23:23.059 "nvme_iov_md": false 00:23:23.059 }, 00:23:23.059 "memory_domains": [ 00:23:23.059 { 00:23:23.059 "dma_device_id": "system", 00:23:23.059 "dma_device_type": 1 00:23:23.059 }, 00:23:23.059 { 00:23:23.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.059 "dma_device_type": 2 00:23:23.059 } 00:23:23.059 ], 00:23:23.059 "driver_specific": {} 00:23:23.059 } 00:23:23.059 ] 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.059 "name": "Existed_Raid", 00:23:23.059 "uuid": "033cb8a0-23e0-4d84-afba-e097ee7c12cd", 00:23:23.059 "strip_size_kb": 64, 00:23:23.059 "state": "configuring", 00:23:23.059 "raid_level": "raid5f", 00:23:23.059 "superblock": true, 00:23:23.059 "num_base_bdevs": 4, 00:23:23.059 "num_base_bdevs_discovered": 3, 
00:23:23.059 "num_base_bdevs_operational": 4, 00:23:23.059 "base_bdevs_list": [ 00:23:23.059 { 00:23:23.059 "name": "BaseBdev1", 00:23:23.059 "uuid": "65297ac5-f588-45e3-8c33-297a1abf0e6f", 00:23:23.059 "is_configured": true, 00:23:23.059 "data_offset": 2048, 00:23:23.059 "data_size": 63488 00:23:23.059 }, 00:23:23.059 { 00:23:23.059 "name": "BaseBdev2", 00:23:23.059 "uuid": "6c33d90f-6a4f-4510-bf33-a846e114c7ae", 00:23:23.059 "is_configured": true, 00:23:23.059 "data_offset": 2048, 00:23:23.059 "data_size": 63488 00:23:23.059 }, 00:23:23.059 { 00:23:23.059 "name": "BaseBdev3", 00:23:23.059 "uuid": "b546c5e4-cb48-453f-996b-86d9d3ac22a7", 00:23:23.059 "is_configured": true, 00:23:23.059 "data_offset": 2048, 00:23:23.059 "data_size": 63488 00:23:23.059 }, 00:23:23.059 { 00:23:23.059 "name": "BaseBdev4", 00:23:23.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.059 "is_configured": false, 00:23:23.059 "data_offset": 0, 00:23:23.059 "data_size": 0 00:23:23.059 } 00:23:23.059 ] 00:23:23.059 }' 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.059 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.626 [2024-12-06 13:17:29.942650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:23.626 [2024-12-06 13:17:29.943251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:23.626 [2024-12-06 13:17:29.943279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:23.626 [2024-12-06 
13:17:29.943656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:23.626 BaseBdev4 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.626 [2024-12-06 13:17:29.950560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:23.626 [2024-12-06 13:17:29.950593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:23.626 [2024-12-06 13:17:29.950923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:23.626 13:17:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.626 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.626 [ 00:23:23.626 { 00:23:23.626 "name": "BaseBdev4", 00:23:23.626 "aliases": [ 00:23:23.626 "16afb9c5-422c-40ea-8ada-ac7d650593e6" 00:23:23.626 ], 00:23:23.626 "product_name": "Malloc disk", 00:23:23.626 "block_size": 512, 00:23:23.626 "num_blocks": 65536, 00:23:23.626 "uuid": "16afb9c5-422c-40ea-8ada-ac7d650593e6", 00:23:23.626 "assigned_rate_limits": { 00:23:23.626 "rw_ios_per_sec": 0, 00:23:23.626 "rw_mbytes_per_sec": 0, 00:23:23.626 "r_mbytes_per_sec": 0, 00:23:23.626 "w_mbytes_per_sec": 0 00:23:23.626 }, 00:23:23.626 "claimed": true, 00:23:23.626 "claim_type": "exclusive_write", 00:23:23.626 "zoned": false, 00:23:23.626 "supported_io_types": { 00:23:23.626 "read": true, 00:23:23.626 "write": true, 00:23:23.626 "unmap": true, 00:23:23.626 "flush": true, 00:23:23.626 "reset": true, 00:23:23.626 "nvme_admin": false, 00:23:23.626 "nvme_io": false, 00:23:23.626 "nvme_io_md": false, 00:23:23.626 "write_zeroes": true, 00:23:23.626 "zcopy": true, 00:23:23.626 "get_zone_info": false, 00:23:23.626 "zone_management": false, 00:23:23.626 "zone_append": false, 00:23:23.626 "compare": false, 00:23:23.626 "compare_and_write": false, 00:23:23.626 "abort": true, 00:23:23.626 "seek_hole": false, 00:23:23.626 "seek_data": false, 00:23:23.626 "copy": true, 00:23:23.626 "nvme_iov_md": false 00:23:23.626 }, 00:23:23.626 "memory_domains": [ 00:23:23.626 { 00:23:23.626 "dma_device_id": "system", 00:23:23.626 "dma_device_type": 1 00:23:23.626 }, 00:23:23.626 { 00:23:23.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.627 "dma_device_type": 2 00:23:23.627 } 00:23:23.627 ], 00:23:23.627 "driver_specific": {} 00:23:23.627 } 00:23:23.627 ] 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.627 13:17:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.627 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:23.627 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.627 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.627 "name": "Existed_Raid", 00:23:23.627 "uuid": "033cb8a0-23e0-4d84-afba-e097ee7c12cd", 00:23:23.627 "strip_size_kb": 64, 00:23:23.627 "state": "online", 00:23:23.627 "raid_level": "raid5f", 00:23:23.627 "superblock": true, 00:23:23.627 "num_base_bdevs": 4, 00:23:23.627 "num_base_bdevs_discovered": 4, 00:23:23.627 "num_base_bdevs_operational": 4, 00:23:23.627 "base_bdevs_list": [ 00:23:23.627 { 00:23:23.627 "name": "BaseBdev1", 00:23:23.627 "uuid": "65297ac5-f588-45e3-8c33-297a1abf0e6f", 00:23:23.627 "is_configured": true, 00:23:23.627 "data_offset": 2048, 00:23:23.627 "data_size": 63488 00:23:23.627 }, 00:23:23.627 { 00:23:23.627 "name": "BaseBdev2", 00:23:23.627 "uuid": "6c33d90f-6a4f-4510-bf33-a846e114c7ae", 00:23:23.627 "is_configured": true, 00:23:23.627 "data_offset": 2048, 00:23:23.627 "data_size": 63488 00:23:23.627 }, 00:23:23.627 { 00:23:23.627 "name": "BaseBdev3", 00:23:23.627 "uuid": "b546c5e4-cb48-453f-996b-86d9d3ac22a7", 00:23:23.627 "is_configured": true, 00:23:23.627 "data_offset": 2048, 00:23:23.627 "data_size": 63488 00:23:23.627 }, 00:23:23.627 { 00:23:23.627 "name": "BaseBdev4", 00:23:23.627 "uuid": "16afb9c5-422c-40ea-8ada-ac7d650593e6", 00:23:23.627 "is_configured": true, 00:23:23.627 "data_offset": 2048, 00:23:23.627 "data_size": 63488 00:23:23.627 } 00:23:23.627 ] 00:23:23.627 }' 00:23:23.627 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.627 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.192 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:24.192 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:23:24.192 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:24.192 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:24.192 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:24.192 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:24.192 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.193 [2024-12-06 13:17:30.506729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:24.193 "name": "Existed_Raid", 00:23:24.193 "aliases": [ 00:23:24.193 "033cb8a0-23e0-4d84-afba-e097ee7c12cd" 00:23:24.193 ], 00:23:24.193 "product_name": "Raid Volume", 00:23:24.193 "block_size": 512, 00:23:24.193 "num_blocks": 190464, 00:23:24.193 "uuid": "033cb8a0-23e0-4d84-afba-e097ee7c12cd", 00:23:24.193 "assigned_rate_limits": { 00:23:24.193 "rw_ios_per_sec": 0, 00:23:24.193 "rw_mbytes_per_sec": 0, 00:23:24.193 "r_mbytes_per_sec": 0, 00:23:24.193 "w_mbytes_per_sec": 0 00:23:24.193 }, 00:23:24.193 "claimed": false, 00:23:24.193 "zoned": false, 00:23:24.193 "supported_io_types": { 00:23:24.193 "read": true, 00:23:24.193 "write": true, 00:23:24.193 "unmap": false, 00:23:24.193 "flush": false, 
00:23:24.193 "reset": true, 00:23:24.193 "nvme_admin": false, 00:23:24.193 "nvme_io": false, 00:23:24.193 "nvme_io_md": false, 00:23:24.193 "write_zeroes": true, 00:23:24.193 "zcopy": false, 00:23:24.193 "get_zone_info": false, 00:23:24.193 "zone_management": false, 00:23:24.193 "zone_append": false, 00:23:24.193 "compare": false, 00:23:24.193 "compare_and_write": false, 00:23:24.193 "abort": false, 00:23:24.193 "seek_hole": false, 00:23:24.193 "seek_data": false, 00:23:24.193 "copy": false, 00:23:24.193 "nvme_iov_md": false 00:23:24.193 }, 00:23:24.193 "driver_specific": { 00:23:24.193 "raid": { 00:23:24.193 "uuid": "033cb8a0-23e0-4d84-afba-e097ee7c12cd", 00:23:24.193 "strip_size_kb": 64, 00:23:24.193 "state": "online", 00:23:24.193 "raid_level": "raid5f", 00:23:24.193 "superblock": true, 00:23:24.193 "num_base_bdevs": 4, 00:23:24.193 "num_base_bdevs_discovered": 4, 00:23:24.193 "num_base_bdevs_operational": 4, 00:23:24.193 "base_bdevs_list": [ 00:23:24.193 { 00:23:24.193 "name": "BaseBdev1", 00:23:24.193 "uuid": "65297ac5-f588-45e3-8c33-297a1abf0e6f", 00:23:24.193 "is_configured": true, 00:23:24.193 "data_offset": 2048, 00:23:24.193 "data_size": 63488 00:23:24.193 }, 00:23:24.193 { 00:23:24.193 "name": "BaseBdev2", 00:23:24.193 "uuid": "6c33d90f-6a4f-4510-bf33-a846e114c7ae", 00:23:24.193 "is_configured": true, 00:23:24.193 "data_offset": 2048, 00:23:24.193 "data_size": 63488 00:23:24.193 }, 00:23:24.193 { 00:23:24.193 "name": "BaseBdev3", 00:23:24.193 "uuid": "b546c5e4-cb48-453f-996b-86d9d3ac22a7", 00:23:24.193 "is_configured": true, 00:23:24.193 "data_offset": 2048, 00:23:24.193 "data_size": 63488 00:23:24.193 }, 00:23:24.193 { 00:23:24.193 "name": "BaseBdev4", 00:23:24.193 "uuid": "16afb9c5-422c-40ea-8ada-ac7d650593e6", 00:23:24.193 "is_configured": true, 00:23:24.193 "data_offset": 2048, 00:23:24.193 "data_size": 63488 00:23:24.193 } 00:23:24.193 ] 00:23:24.193 } 00:23:24.193 } 00:23:24.193 }' 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:24.193 BaseBdev2 00:23:24.193 BaseBdev3 00:23:24.193 BaseBdev4' 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:24.193 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:24.451 13:17:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.451 [2024-12-06 13:17:30.874694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.451 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.708 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.708 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.708 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.708 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.708 "name": "Existed_Raid", 00:23:24.708 "uuid": "033cb8a0-23e0-4d84-afba-e097ee7c12cd", 00:23:24.708 "strip_size_kb": 64, 00:23:24.708 "state": "online", 00:23:24.708 "raid_level": "raid5f", 00:23:24.708 "superblock": true, 00:23:24.708 "num_base_bdevs": 4, 00:23:24.708 "num_base_bdevs_discovered": 3, 00:23:24.708 "num_base_bdevs_operational": 3, 00:23:24.708 "base_bdevs_list": [ 00:23:24.708 { 00:23:24.708 "name": 
null, 00:23:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.708 "is_configured": false, 00:23:24.708 "data_offset": 0, 00:23:24.708 "data_size": 63488 00:23:24.708 }, 00:23:24.708 { 00:23:24.708 "name": "BaseBdev2", 00:23:24.709 "uuid": "6c33d90f-6a4f-4510-bf33-a846e114c7ae", 00:23:24.709 "is_configured": true, 00:23:24.709 "data_offset": 2048, 00:23:24.709 "data_size": 63488 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "name": "BaseBdev3", 00:23:24.709 "uuid": "b546c5e4-cb48-453f-996b-86d9d3ac22a7", 00:23:24.709 "is_configured": true, 00:23:24.709 "data_offset": 2048, 00:23:24.709 "data_size": 63488 00:23:24.709 }, 00:23:24.709 { 00:23:24.709 "name": "BaseBdev4", 00:23:24.709 "uuid": "16afb9c5-422c-40ea-8ada-ac7d650593e6", 00:23:24.709 "is_configured": true, 00:23:24.709 "data_offset": 2048, 00:23:24.709 "data_size": 63488 00:23:24.709 } 00:23:24.709 ] 00:23:24.709 }' 00:23:24.709 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.709 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.966 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:24.966 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:24.966 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.966 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:24.966 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.966 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.225 [2024-12-06 13:17:31.547302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:25.225 [2024-12-06 13:17:31.547702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:25.225 [2024-12-06 13:17:31.630036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.225 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.225 [2024-12-06 13:17:31.698092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:25.483 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.483 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:25.483 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:25.483 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.483 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:25.483 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.484 [2024-12-06 
13:17:31.849470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:25.484 [2024-12-06 13:17:31.849549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:25.484 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.484 13:17:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.743 BaseBdev2 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.743 [ 00:23:25.743 { 00:23:25.743 "name": "BaseBdev2", 00:23:25.743 "aliases": [ 00:23:25.743 "5c662354-5080-4d2c-a62e-107791b4b898" 00:23:25.743 ], 00:23:25.743 "product_name": "Malloc disk", 00:23:25.743 "block_size": 512, 00:23:25.743 
"num_blocks": 65536, 00:23:25.743 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898", 00:23:25.743 "assigned_rate_limits": { 00:23:25.743 "rw_ios_per_sec": 0, 00:23:25.743 "rw_mbytes_per_sec": 0, 00:23:25.743 "r_mbytes_per_sec": 0, 00:23:25.743 "w_mbytes_per_sec": 0 00:23:25.743 }, 00:23:25.743 "claimed": false, 00:23:25.743 "zoned": false, 00:23:25.743 "supported_io_types": { 00:23:25.743 "read": true, 00:23:25.743 "write": true, 00:23:25.743 "unmap": true, 00:23:25.743 "flush": true, 00:23:25.743 "reset": true, 00:23:25.743 "nvme_admin": false, 00:23:25.743 "nvme_io": false, 00:23:25.743 "nvme_io_md": false, 00:23:25.743 "write_zeroes": true, 00:23:25.743 "zcopy": true, 00:23:25.743 "get_zone_info": false, 00:23:25.743 "zone_management": false, 00:23:25.743 "zone_append": false, 00:23:25.743 "compare": false, 00:23:25.743 "compare_and_write": false, 00:23:25.743 "abort": true, 00:23:25.743 "seek_hole": false, 00:23:25.743 "seek_data": false, 00:23:25.743 "copy": true, 00:23:25.743 "nvme_iov_md": false 00:23:25.743 }, 00:23:25.743 "memory_domains": [ 00:23:25.743 { 00:23:25.743 "dma_device_id": "system", 00:23:25.743 "dma_device_type": 1 00:23:25.743 }, 00:23:25.743 { 00:23:25.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.743 "dma_device_type": 2 00:23:25.743 } 00:23:25.743 ], 00:23:25.743 "driver_specific": {} 00:23:25.743 } 00:23:25.743 ] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:25.743 13:17:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.743 BaseBdev3 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.743 [ 00:23:25.743 { 00:23:25.743 "name": "BaseBdev3", 00:23:25.743 "aliases": [ 00:23:25.743 
"e58027a6-a818-42c0-b931-d58c7d56a657" 00:23:25.743 ], 00:23:25.743 "product_name": "Malloc disk", 00:23:25.743 "block_size": 512, 00:23:25.743 "num_blocks": 65536, 00:23:25.743 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657", 00:23:25.743 "assigned_rate_limits": { 00:23:25.743 "rw_ios_per_sec": 0, 00:23:25.743 "rw_mbytes_per_sec": 0, 00:23:25.743 "r_mbytes_per_sec": 0, 00:23:25.743 "w_mbytes_per_sec": 0 00:23:25.743 }, 00:23:25.743 "claimed": false, 00:23:25.743 "zoned": false, 00:23:25.743 "supported_io_types": { 00:23:25.743 "read": true, 00:23:25.743 "write": true, 00:23:25.743 "unmap": true, 00:23:25.743 "flush": true, 00:23:25.743 "reset": true, 00:23:25.743 "nvme_admin": false, 00:23:25.743 "nvme_io": false, 00:23:25.743 "nvme_io_md": false, 00:23:25.743 "write_zeroes": true, 00:23:25.743 "zcopy": true, 00:23:25.743 "get_zone_info": false, 00:23:25.743 "zone_management": false, 00:23:25.743 "zone_append": false, 00:23:25.743 "compare": false, 00:23:25.743 "compare_and_write": false, 00:23:25.743 "abort": true, 00:23:25.743 "seek_hole": false, 00:23:25.743 "seek_data": false, 00:23:25.743 "copy": true, 00:23:25.743 "nvme_iov_md": false 00:23:25.743 }, 00:23:25.743 "memory_domains": [ 00:23:25.743 { 00:23:25.743 "dma_device_id": "system", 00:23:25.743 "dma_device_type": 1 00:23:25.743 }, 00:23:25.743 { 00:23:25.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.743 "dma_device_type": 2 00:23:25.743 } 00:23:25.743 ], 00:23:25.743 "driver_specific": {} 00:23:25.743 } 00:23:25.743 ] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:25.743 13:17:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.743 BaseBdev4 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.743 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:25.743 [ 00:23:25.743 { 00:23:25.743 "name": "BaseBdev4", 00:23:25.743 "aliases": [ 00:23:25.743 "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5" 00:23:25.743 ], 00:23:25.743 "product_name": "Malloc disk", 00:23:25.743 "block_size": 512, 00:23:25.743 "num_blocks": 65536, 00:23:25.743 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:25.743 "assigned_rate_limits": { 00:23:25.743 "rw_ios_per_sec": 0, 00:23:25.743 "rw_mbytes_per_sec": 0, 00:23:25.743 "r_mbytes_per_sec": 0, 00:23:25.744 "w_mbytes_per_sec": 0 00:23:25.744 }, 00:23:25.744 "claimed": false, 00:23:25.744 "zoned": false, 00:23:25.744 "supported_io_types": { 00:23:25.744 "read": true, 00:23:25.744 "write": true, 00:23:25.744 "unmap": true, 00:23:25.744 "flush": true, 00:23:25.744 "reset": true, 00:23:25.744 "nvme_admin": false, 00:23:25.744 "nvme_io": false, 00:23:25.744 "nvme_io_md": false, 00:23:25.744 "write_zeroes": true, 00:23:25.744 "zcopy": true, 00:23:25.744 "get_zone_info": false, 00:23:25.744 "zone_management": false, 00:23:25.744 "zone_append": false, 00:23:25.744 "compare": false, 00:23:25.744 "compare_and_write": false, 00:23:25.744 "abort": true, 00:23:25.744 "seek_hole": false, 00:23:25.744 "seek_data": false, 00:23:25.744 "copy": true, 00:23:25.744 "nvme_iov_md": false 00:23:25.744 }, 00:23:25.744 "memory_domains": [ 00:23:25.744 { 00:23:25.744 "dma_device_id": "system", 00:23:25.744 "dma_device_type": 1 00:23:25.744 }, 00:23:25.744 { 00:23:25.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.744 "dma_device_type": 2 00:23:25.744 } 00:23:25.744 ], 00:23:25.744 "driver_specific": {} 00:23:25.744 } 00:23:25.744 ] 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:25.744 13:17:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.744 [2024-12-06 13:17:32.218563] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:25.744 [2024-12-06 13:17:32.218764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:25.744 [2024-12-06 13:17:32.218819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:25.744 [2024-12-06 13:17:32.221446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:25.744 [2024-12-06 13:17:32.221594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.744 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.019 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.019 "name": "Existed_Raid", 00:23:26.019 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c", 00:23:26.019 "strip_size_kb": 64, 00:23:26.019 "state": "configuring", 00:23:26.019 "raid_level": "raid5f", 00:23:26.019 "superblock": true, 00:23:26.019 "num_base_bdevs": 4, 00:23:26.019 "num_base_bdevs_discovered": 3, 00:23:26.019 "num_base_bdevs_operational": 4, 00:23:26.019 "base_bdevs_list": [ 00:23:26.019 { 00:23:26.019 "name": "BaseBdev1", 00:23:26.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.019 "is_configured": false, 00:23:26.019 "data_offset": 0, 00:23:26.019 "data_size": 0 00:23:26.019 }, 00:23:26.019 { 00:23:26.019 "name": "BaseBdev2", 00:23:26.019 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898", 00:23:26.019 "is_configured": true, 00:23:26.019 "data_offset": 2048, 00:23:26.019 
"data_size": 63488 00:23:26.019 }, 00:23:26.019 { 00:23:26.019 "name": "BaseBdev3", 00:23:26.019 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657", 00:23:26.019 "is_configured": true, 00:23:26.019 "data_offset": 2048, 00:23:26.019 "data_size": 63488 00:23:26.019 }, 00:23:26.019 { 00:23:26.019 "name": "BaseBdev4", 00:23:26.019 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:26.019 "is_configured": true, 00:23:26.019 "data_offset": 2048, 00:23:26.019 "data_size": 63488 00:23:26.019 } 00:23:26.019 ] 00:23:26.019 }' 00:23:26.019 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.019 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.279 [2024-12-06 13:17:32.738741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.279 13:17:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.279 "name": "Existed_Raid", 00:23:26.279 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c", 00:23:26.279 "strip_size_kb": 64, 00:23:26.279 "state": "configuring", 00:23:26.279 "raid_level": "raid5f", 00:23:26.279 "superblock": true, 00:23:26.279 "num_base_bdevs": 4, 00:23:26.279 "num_base_bdevs_discovered": 2, 00:23:26.279 "num_base_bdevs_operational": 4, 00:23:26.279 "base_bdevs_list": [ 00:23:26.279 { 00:23:26.279 "name": "BaseBdev1", 00:23:26.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.279 "is_configured": false, 00:23:26.279 "data_offset": 0, 00:23:26.279 "data_size": 0 00:23:26.279 }, 00:23:26.279 { 00:23:26.279 "name": null, 00:23:26.279 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898", 00:23:26.279 
"is_configured": false, 00:23:26.279 "data_offset": 0, 00:23:26.279 "data_size": 63488 00:23:26.279 }, 00:23:26.279 { 00:23:26.279 "name": "BaseBdev3", 00:23:26.279 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657", 00:23:26.279 "is_configured": true, 00:23:26.279 "data_offset": 2048, 00:23:26.279 "data_size": 63488 00:23:26.279 }, 00:23:26.279 { 00:23:26.279 "name": "BaseBdev4", 00:23:26.279 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:26.279 "is_configured": true, 00:23:26.279 "data_offset": 2048, 00:23:26.279 "data_size": 63488 00:23:26.279 } 00:23:26.279 ] 00:23:26.279 }' 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.279 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.845 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.846 [2024-12-06 13:17:33.309590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:23:26.846 BaseBdev1
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:26.846 [
00:23:26.846 {
00:23:26.846 "name": "BaseBdev1",
00:23:26.846 "aliases": [
00:23:26.846 "ecc94558-e501-4f02-8bf3-213f3772e633"
00:23:26.846 ],
00:23:26.846 "product_name": "Malloc disk",
00:23:26.846 "block_size": 512,
00:23:26.846 "num_blocks": 65536,
00:23:26.846 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633",
00:23:26.846 "assigned_rate_limits": { 00:23:26.846 "rw_ios_per_sec": 0, 00:23:26.846 "rw_mbytes_per_sec": 0, 00:23:26.846 "r_mbytes_per_sec": 0, 00:23:26.846 "w_mbytes_per_sec": 0 00:23:26.846 }, 00:23:26.846 "claimed": true, 00:23:26.846 "claim_type": "exclusive_write", 00:23:26.846 "zoned": false, 00:23:26.846 "supported_io_types": { 00:23:26.846 "read": true, 00:23:26.846 "write": true, 00:23:26.846 "unmap": true, 00:23:26.846 "flush": true, 00:23:26.846 "reset": true, 00:23:26.846 "nvme_admin": false, 00:23:26.846 "nvme_io": false, 00:23:26.846 "nvme_io_md": false, 00:23:26.846 "write_zeroes": true, 00:23:26.846 "zcopy": true, 00:23:26.846 "get_zone_info": false, 00:23:26.846 "zone_management": false, 00:23:26.846 "zone_append": false, 00:23:26.846 "compare": false, 00:23:26.846 "compare_and_write": false, 00:23:26.846 "abort": true, 00:23:26.846 "seek_hole": false, 00:23:26.846 "seek_data": false, 00:23:26.846 "copy": true, 00:23:26.846 "nvme_iov_md": false 00:23:26.846 }, 00:23:26.846 "memory_domains": [ 00:23:26.846 { 00:23:26.846 "dma_device_id": "system", 00:23:26.846 "dma_device_type": 1 00:23:26.846 }, 00:23:26.846 { 00:23:26.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.846 "dma_device_type": 2 00:23:26.846 } 00:23:26.846 ], 00:23:26.846 "driver_specific": {} 00:23:26.846 } 00:23:26.846 ] 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:26.846 13:17:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.846 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.104 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.104 "name": "Existed_Raid", 00:23:27.104 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c", 00:23:27.104 "strip_size_kb": 64, 00:23:27.104 "state": "configuring", 00:23:27.104 "raid_level": "raid5f", 00:23:27.104 "superblock": true, 00:23:27.105 "num_base_bdevs": 4, 00:23:27.105 "num_base_bdevs_discovered": 3, 00:23:27.105 "num_base_bdevs_operational": 4, 00:23:27.105 "base_bdevs_list": [ 00:23:27.105 { 00:23:27.105 "name": "BaseBdev1", 00:23:27.105 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633", 
00:23:27.105 "is_configured": true, 00:23:27.105 "data_offset": 2048, 00:23:27.105 "data_size": 63488 00:23:27.105 }, 00:23:27.105 { 00:23:27.105 "name": null, 00:23:27.105 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898", 00:23:27.105 "is_configured": false, 00:23:27.105 "data_offset": 0, 00:23:27.105 "data_size": 63488 00:23:27.105 }, 00:23:27.105 { 00:23:27.105 "name": "BaseBdev3", 00:23:27.105 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657", 00:23:27.105 "is_configured": true, 00:23:27.105 "data_offset": 2048, 00:23:27.105 "data_size": 63488 00:23:27.105 }, 00:23:27.105 { 00:23:27.105 "name": "BaseBdev4", 00:23:27.105 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:27.105 "is_configured": true, 00:23:27.105 "data_offset": 2048, 00:23:27.105 "data_size": 63488 00:23:27.105 } 00:23:27.105 ] 00:23:27.105 }' 00:23:27.105 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.105 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:27.672 [2024-12-06 13:17:33.973846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:27.672 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:27.672 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:27.672 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:27.672 "name": "Existed_Raid",
00:23:27.672 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c",
00:23:27.672 "strip_size_kb": 64,
00:23:27.672 "state": "configuring",
00:23:27.672 "raid_level": "raid5f",
00:23:27.672 "superblock": true,
00:23:27.672 "num_base_bdevs": 4,
00:23:27.672 "num_base_bdevs_discovered": 2,
00:23:27.672 "num_base_bdevs_operational": 4,
00:23:27.672 "base_bdevs_list": [
00:23:27.672 {
00:23:27.672 "name": "BaseBdev1",
00:23:27.672 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633",
00:23:27.672 "is_configured": true,
00:23:27.672 "data_offset": 2048,
00:23:27.672 "data_size": 63488
00:23:27.672 },
00:23:27.672 {
00:23:27.672 "name": null,
00:23:27.672 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898",
00:23:27.672 "is_configured": false,
00:23:27.672 "data_offset": 0,
00:23:27.672 "data_size": 63488
00:23:27.672 },
00:23:27.672 {
00:23:27.672 "name": null,
00:23:27.672 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657",
00:23:27.672 "is_configured": false,
00:23:27.672 "data_offset": 0,
00:23:27.672 "data_size": 63488
00:23:27.672 },
00:23:27.672 {
00:23:27.672 "name": "BaseBdev4",
00:23:27.672 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5",
00:23:27.672 "is_configured": true,
00:23:27.672 "data_offset": 2048,
00:23:27.672 "data_size": 63488
00:23:27.672 }
00:23:27.672 ]
00:23:27.672 }'
00:23:27.672 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:27.672 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:28.240 [2024-12-06 13:17:34.614042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:28.240 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:28.240 "name": "Existed_Raid",
00:23:28.240 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c",
00:23:28.240 "strip_size_kb": 64,
00:23:28.240 "state": "configuring",
00:23:28.240 "raid_level": "raid5f",
00:23:28.240 "superblock": true,
00:23:28.240 "num_base_bdevs": 4,
00:23:28.240 "num_base_bdevs_discovered": 3,
00:23:28.240 "num_base_bdevs_operational": 4,
00:23:28.240 "base_bdevs_list": [
00:23:28.240 {
00:23:28.240 "name": "BaseBdev1",
00:23:28.240 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633",
00:23:28.240 "is_configured": true,
00:23:28.240 "data_offset": 2048,
00:23:28.240 "data_size": 63488
00:23:28.240 },
00:23:28.240 {
00:23:28.240 "name": null,
00:23:28.240 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898",
00:23:28.240 "is_configured": false,
00:23:28.240 "data_offset": 0,
00:23:28.240 "data_size": 63488
00:23:28.240 },
00:23:28.240 {
00:23:28.240 "name": "BaseBdev3",
00:23:28.240 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657",
00:23:28.240 "is_configured": true, 00:23:28.240 "data_offset": 2048, 00:23:28.240 "data_size": 63488 00:23:28.240 }, 00:23:28.240 { 00:23:28.240 "name": "BaseBdev4", 00:23:28.241 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:28.241 "is_configured": true, 00:23:28.241 "data_offset": 2048, 00:23:28.241 "data_size": 63488 00:23:28.241 } 00:23:28.241 ] 00:23:28.241 }' 00:23:28.241 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.241 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.834 [2024-12-06 13:17:35.182275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.834 "name": "Existed_Raid", 00:23:28.834 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c", 00:23:28.834 "strip_size_kb": 64, 00:23:28.834 "state": "configuring", 00:23:28.834 "raid_level": "raid5f", 
00:23:28.834 "superblock": true, 00:23:28.834 "num_base_bdevs": 4, 00:23:28.834 "num_base_bdevs_discovered": 2, 00:23:28.834 "num_base_bdevs_operational": 4, 00:23:28.834 "base_bdevs_list": [ 00:23:28.834 { 00:23:28.834 "name": null, 00:23:28.834 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633", 00:23:28.834 "is_configured": false, 00:23:28.834 "data_offset": 0, 00:23:28.834 "data_size": 63488 00:23:28.834 }, 00:23:28.834 { 00:23:28.834 "name": null, 00:23:28.834 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898", 00:23:28.834 "is_configured": false, 00:23:28.834 "data_offset": 0, 00:23:28.834 "data_size": 63488 00:23:28.834 }, 00:23:28.834 { 00:23:28.834 "name": "BaseBdev3", 00:23:28.834 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657", 00:23:28.834 "is_configured": true, 00:23:28.834 "data_offset": 2048, 00:23:28.834 "data_size": 63488 00:23:28.834 }, 00:23:28.834 { 00:23:28.834 "name": "BaseBdev4", 00:23:28.834 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:28.834 "is_configured": true, 00:23:28.834 "data_offset": 2048, 00:23:28.834 "data_size": 63488 00:23:28.834 } 00:23:28.834 ] 00:23:28.834 }' 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.834 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.400 [2024-12-06 13:17:35.878605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.400 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.659 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.659 "name": "Existed_Raid", 00:23:29.659 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c", 00:23:29.659 "strip_size_kb": 64, 00:23:29.659 "state": "configuring", 00:23:29.659 "raid_level": "raid5f", 00:23:29.659 "superblock": true, 00:23:29.659 "num_base_bdevs": 4, 00:23:29.659 "num_base_bdevs_discovered": 3, 00:23:29.659 "num_base_bdevs_operational": 4, 00:23:29.659 "base_bdevs_list": [ 00:23:29.659 { 00:23:29.659 "name": null, 00:23:29.659 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633", 00:23:29.659 "is_configured": false, 00:23:29.659 "data_offset": 0, 00:23:29.659 "data_size": 63488 00:23:29.659 }, 00:23:29.659 { 00:23:29.659 "name": "BaseBdev2", 00:23:29.659 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898", 00:23:29.659 "is_configured": true, 00:23:29.659 "data_offset": 2048, 00:23:29.659 "data_size": 63488 00:23:29.659 }, 00:23:29.659 { 00:23:29.659 "name": "BaseBdev3", 00:23:29.659 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657", 00:23:29.659 "is_configured": true, 00:23:29.659 "data_offset": 2048, 00:23:29.659 "data_size": 63488 00:23:29.659 }, 00:23:29.659 { 00:23:29.659 "name": "BaseBdev4", 00:23:29.659 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:29.659 "is_configured": true, 00:23:29.659 "data_offset": 2048, 00:23:29.659 "data_size": 63488 00:23:29.659 } 00:23:29.659 ] 00:23:29.659 }' 00:23:29.659 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:23:29.659 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.975 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ecc94558-e501-4f02-8bf3-213f3772e633 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.250 [2024-12-06 13:17:36.540238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:30.250 [2024-12-06 13:17:36.540584] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:30.250 [2024-12-06 13:17:36.540606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:30.250 [2024-12-06 13:17:36.540930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:30.250 NewBaseBdev 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.250 [2024-12-06 13:17:36.547739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:30.250 [2024-12-06 13:17:36.547772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:30.250 [2024-12-06 13:17:36.548069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.250 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.250 [ 00:23:30.250 { 00:23:30.250 "name": "NewBaseBdev", 00:23:30.250 "aliases": [ 00:23:30.250 "ecc94558-e501-4f02-8bf3-213f3772e633" 00:23:30.250 ], 00:23:30.250 "product_name": "Malloc disk", 00:23:30.250 "block_size": 512, 00:23:30.250 "num_blocks": 65536, 00:23:30.250 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633", 00:23:30.250 "assigned_rate_limits": { 00:23:30.250 "rw_ios_per_sec": 0, 00:23:30.250 "rw_mbytes_per_sec": 0, 00:23:30.250 "r_mbytes_per_sec": 0, 00:23:30.250 "w_mbytes_per_sec": 0 00:23:30.250 }, 00:23:30.250 "claimed": true, 00:23:30.250 "claim_type": "exclusive_write", 00:23:30.250 "zoned": false, 00:23:30.250 "supported_io_types": { 00:23:30.250 "read": true, 00:23:30.250 "write": true, 00:23:30.250 "unmap": true, 00:23:30.250 "flush": true, 00:23:30.250 "reset": true, 00:23:30.250 "nvme_admin": false, 00:23:30.250 "nvme_io": false, 00:23:30.250 "nvme_io_md": false, 00:23:30.250 "write_zeroes": true, 00:23:30.250 "zcopy": true, 00:23:30.250 "get_zone_info": false, 00:23:30.250 "zone_management": false, 00:23:30.250 "zone_append": false, 00:23:30.250 "compare": false, 00:23:30.250 "compare_and_write": false, 00:23:30.250 "abort": true, 00:23:30.250 "seek_hole": false, 00:23:30.250 "seek_data": false, 00:23:30.250 "copy": true, 00:23:30.250 "nvme_iov_md": false 00:23:30.250 }, 00:23:30.250 "memory_domains": [ 00:23:30.250 { 00:23:30.251 "dma_device_id": "system", 00:23:30.251 "dma_device_type": 1 00:23:30.251 }, 00:23:30.251 { 00:23:30.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.251 "dma_device_type": 2 00:23:30.251 } 
00:23:30.251 ], 00:23:30.251 "driver_specific": {} 00:23:30.251 } 00:23:30.251 ] 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.251 
13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.251 "name": "Existed_Raid", 00:23:30.251 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c", 00:23:30.251 "strip_size_kb": 64, 00:23:30.251 "state": "online", 00:23:30.251 "raid_level": "raid5f", 00:23:30.251 "superblock": true, 00:23:30.251 "num_base_bdevs": 4, 00:23:30.251 "num_base_bdevs_discovered": 4, 00:23:30.251 "num_base_bdevs_operational": 4, 00:23:30.251 "base_bdevs_list": [ 00:23:30.251 { 00:23:30.251 "name": "NewBaseBdev", 00:23:30.251 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633", 00:23:30.251 "is_configured": true, 00:23:30.251 "data_offset": 2048, 00:23:30.251 "data_size": 63488 00:23:30.251 }, 00:23:30.251 { 00:23:30.251 "name": "BaseBdev2", 00:23:30.251 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898", 00:23:30.251 "is_configured": true, 00:23:30.251 "data_offset": 2048, 00:23:30.251 "data_size": 63488 00:23:30.251 }, 00:23:30.251 { 00:23:30.251 "name": "BaseBdev3", 00:23:30.251 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657", 00:23:30.251 "is_configured": true, 00:23:30.251 "data_offset": 2048, 00:23:30.251 "data_size": 63488 00:23:30.251 }, 00:23:30.251 { 00:23:30.251 "name": "BaseBdev4", 00:23:30.251 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:30.251 "is_configured": true, 00:23:30.251 "data_offset": 2048, 00:23:30.251 "data_size": 63488 00:23:30.251 } 00:23:30.251 ] 00:23:30.251 }' 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.251 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.818 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:30.818 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.819 [2024-12-06 13:17:37.123897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.819 "name": "Existed_Raid", 00:23:30.819 "aliases": [ 00:23:30.819 "f492b05a-6b50-4c6b-a9db-7827231b8c0c" 00:23:30.819 ], 00:23:30.819 "product_name": "Raid Volume", 00:23:30.819 "block_size": 512, 00:23:30.819 "num_blocks": 190464, 00:23:30.819 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c", 00:23:30.819 "assigned_rate_limits": { 00:23:30.819 "rw_ios_per_sec": 0, 00:23:30.819 "rw_mbytes_per_sec": 0, 00:23:30.819 "r_mbytes_per_sec": 0, 00:23:30.819 "w_mbytes_per_sec": 0 00:23:30.819 }, 00:23:30.819 "claimed": false, 00:23:30.819 "zoned": false, 00:23:30.819 "supported_io_types": { 00:23:30.819 "read": true, 00:23:30.819 "write": true, 00:23:30.819 "unmap": false, 00:23:30.819 "flush": false, 
00:23:30.819 "reset": true, 00:23:30.819 "nvme_admin": false, 00:23:30.819 "nvme_io": false, 00:23:30.819 "nvme_io_md": false, 00:23:30.819 "write_zeroes": true, 00:23:30.819 "zcopy": false, 00:23:30.819 "get_zone_info": false, 00:23:30.819 "zone_management": false, 00:23:30.819 "zone_append": false, 00:23:30.819 "compare": false, 00:23:30.819 "compare_and_write": false, 00:23:30.819 "abort": false, 00:23:30.819 "seek_hole": false, 00:23:30.819 "seek_data": false, 00:23:30.819 "copy": false, 00:23:30.819 "nvme_iov_md": false 00:23:30.819 }, 00:23:30.819 "driver_specific": { 00:23:30.819 "raid": { 00:23:30.819 "uuid": "f492b05a-6b50-4c6b-a9db-7827231b8c0c", 00:23:30.819 "strip_size_kb": 64, 00:23:30.819 "state": "online", 00:23:30.819 "raid_level": "raid5f", 00:23:30.819 "superblock": true, 00:23:30.819 "num_base_bdevs": 4, 00:23:30.819 "num_base_bdevs_discovered": 4, 00:23:30.819 "num_base_bdevs_operational": 4, 00:23:30.819 "base_bdevs_list": [ 00:23:30.819 { 00:23:30.819 "name": "NewBaseBdev", 00:23:30.819 "uuid": "ecc94558-e501-4f02-8bf3-213f3772e633", 00:23:30.819 "is_configured": true, 00:23:30.819 "data_offset": 2048, 00:23:30.819 "data_size": 63488 00:23:30.819 }, 00:23:30.819 { 00:23:30.819 "name": "BaseBdev2", 00:23:30.819 "uuid": "5c662354-5080-4d2c-a62e-107791b4b898", 00:23:30.819 "is_configured": true, 00:23:30.819 "data_offset": 2048, 00:23:30.819 "data_size": 63488 00:23:30.819 }, 00:23:30.819 { 00:23:30.819 "name": "BaseBdev3", 00:23:30.819 "uuid": "e58027a6-a818-42c0-b931-d58c7d56a657", 00:23:30.819 "is_configured": true, 00:23:30.819 "data_offset": 2048, 00:23:30.819 "data_size": 63488 00:23:30.819 }, 00:23:30.819 { 00:23:30.819 "name": "BaseBdev4", 00:23:30.819 "uuid": "fce2929f-b5db-4fea-9f5a-9cb5e167bfe5", 00:23:30.819 "is_configured": true, 00:23:30.819 "data_offset": 2048, 00:23:30.819 "data_size": 63488 00:23:30.819 } 00:23:30.819 ] 00:23:30.819 } 00:23:30.819 } 00:23:30.819 }' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:30.819 BaseBdev2 00:23:30.819 BaseBdev3 00:23:30.819 BaseBdev4' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:30.819 
13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.819 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:23:31.078 13:17:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:31.078 [2024-12-06 13:17:37.491677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:31.078 [2024-12-06 13:17:37.491717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.078 [2024-12-06 13:17:37.491827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.078 [2024-12-06 13:17:37.492204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.078 [2024-12-06 13:17:37.492223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84183 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84183 ']' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 84183 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84183 00:23:31.078 killing process with pid 84183 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84183' 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84183 00:23:31.078 [2024-12-06 13:17:37.536816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:31.078 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84183 00:23:31.647 [2024-12-06 13:17:37.904175] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:32.583 13:17:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:32.583 00:23:32.583 real 0m13.023s 00:23:32.583 user 0m21.483s 00:23:32.583 sys 0m1.925s 00:23:32.583 13:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.583 13:17:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.583 ************************************ 00:23:32.583 END TEST raid5f_state_function_test_sb 00:23:32.583 ************************************ 00:23:32.583 13:17:39 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:23:32.583 13:17:39 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:32.583 13:17:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.583 13:17:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:32.583 ************************************ 00:23:32.583 START TEST raid5f_superblock_test 00:23:32.583 ************************************ 00:23:32.583 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84861 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84861 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84861 ']' 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.584 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.843 [2024-12-06 13:17:39.156352] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:23:32.843 [2024-12-06 13:17:39.156547] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84861 ] 00:23:32.843 [2024-12-06 13:17:39.342730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.102 [2024-12-06 13:17:39.507772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.390 [2024-12-06 13:17:39.737656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:33.390 [2024-12-06 13:17:39.737713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:33.958 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.958 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:33.958 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:33.958 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 malloc1 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 [2024-12-06 13:17:40.235796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:33.959 [2024-12-06 13:17:40.235881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.959 [2024-12-06 13:17:40.235916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:33.959 [2024-12-06 13:17:40.235931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.959 [2024-12-06 13:17:40.238963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.959 [2024-12-06 13:17:40.239010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:33.959 pt1 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 malloc2 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 [2024-12-06 13:17:40.293942] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:33.959 [2024-12-06 13:17:40.294012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.959 [2024-12-06 13:17:40.294047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:33.959 [2024-12-06 13:17:40.294062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.959 [2024-12-06 13:17:40.297240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.959 [2024-12-06 13:17:40.297285] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:33.959 pt2 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 malloc3 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 [2024-12-06 13:17:40.363768] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:33.959 [2024-12-06 13:17:40.363835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.959 [2024-12-06 13:17:40.363867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:33.959 [2024-12-06 13:17:40.363881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.959 [2024-12-06 13:17:40.366669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.959 [2024-12-06 13:17:40.366712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:33.959 pt3 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 malloc4 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.959 [2024-12-06 13:17:40.420384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:33.959 [2024-12-06 13:17:40.420480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.959 [2024-12-06 13:17:40.420513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:33.959 [2024-12-06 13:17:40.420528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.959 [2024-12-06 13:17:40.423306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.959 [2024-12-06 13:17:40.423346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:33.959 pt4 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.959 [2024-12-06 13:17:40.432405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:33.959 [2024-12-06 13:17:40.434803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:33.959 [2024-12-06 13:17:40.434930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:33.959 [2024-12-06 13:17:40.435004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:33.959 [2024-12-06 13:17:40.435270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:33.959 [2024-12-06 13:17:40.435303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:33.959 [2024-12-06 13:17:40.435635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:33.959 [2024-12-06 13:17:40.442424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:33.959 [2024-12-06 13:17:40.442468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:33.959 [2024-12-06 13:17:40.442704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.959 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:33.960 
13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.960 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.219 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:34.219 "name": "raid_bdev1", 00:23:34.219 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:34.219 "strip_size_kb": 64, 00:23:34.219 "state": "online", 00:23:34.219 "raid_level": "raid5f", 00:23:34.219 "superblock": true, 00:23:34.219 "num_base_bdevs": 4, 00:23:34.219 "num_base_bdevs_discovered": 4, 00:23:34.219 "num_base_bdevs_operational": 4, 00:23:34.219 "base_bdevs_list": [ 00:23:34.219 { 00:23:34.219 "name": "pt1", 00:23:34.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:34.219 "is_configured": true, 00:23:34.219 "data_offset": 2048, 00:23:34.219 "data_size": 63488 00:23:34.219 }, 00:23:34.219 { 00:23:34.219 "name": "pt2", 00:23:34.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:34.219 "is_configured": true, 00:23:34.219 "data_offset": 2048, 00:23:34.219 
"data_size": 63488 00:23:34.219 }, 00:23:34.219 { 00:23:34.219 "name": "pt3", 00:23:34.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:34.219 "is_configured": true, 00:23:34.219 "data_offset": 2048, 00:23:34.219 "data_size": 63488 00:23:34.219 }, 00:23:34.219 { 00:23:34.219 "name": "pt4", 00:23:34.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:34.219 "is_configured": true, 00:23:34.219 "data_offset": 2048, 00:23:34.219 "data_size": 63488 00:23:34.219 } 00:23:34.219 ] 00:23:34.219 }' 00:23:34.219 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.219 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.478 [2024-12-06 13:17:40.971139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:34.478 13:17:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:34.738 "name": "raid_bdev1", 00:23:34.738 "aliases": [ 00:23:34.738 "45d116c4-7b16-4be5-ae8b-b3c106d085ba" 00:23:34.738 ], 00:23:34.738 "product_name": "Raid Volume", 00:23:34.738 "block_size": 512, 00:23:34.738 "num_blocks": 190464, 00:23:34.738 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:34.738 "assigned_rate_limits": { 00:23:34.738 "rw_ios_per_sec": 0, 00:23:34.738 "rw_mbytes_per_sec": 0, 00:23:34.738 "r_mbytes_per_sec": 0, 00:23:34.738 "w_mbytes_per_sec": 0 00:23:34.738 }, 00:23:34.738 "claimed": false, 00:23:34.738 "zoned": false, 00:23:34.738 "supported_io_types": { 00:23:34.738 "read": true, 00:23:34.738 "write": true, 00:23:34.738 "unmap": false, 00:23:34.738 "flush": false, 00:23:34.738 "reset": true, 00:23:34.738 "nvme_admin": false, 00:23:34.738 "nvme_io": false, 00:23:34.738 "nvme_io_md": false, 00:23:34.738 "write_zeroes": true, 00:23:34.738 "zcopy": false, 00:23:34.738 "get_zone_info": false, 00:23:34.738 "zone_management": false, 00:23:34.738 "zone_append": false, 00:23:34.738 "compare": false, 00:23:34.738 "compare_and_write": false, 00:23:34.738 "abort": false, 00:23:34.738 "seek_hole": false, 00:23:34.738 "seek_data": false, 00:23:34.738 "copy": false, 00:23:34.738 "nvme_iov_md": false 00:23:34.738 }, 00:23:34.738 "driver_specific": { 00:23:34.738 "raid": { 00:23:34.738 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:34.738 "strip_size_kb": 64, 00:23:34.738 "state": "online", 00:23:34.738 "raid_level": "raid5f", 00:23:34.738 "superblock": true, 00:23:34.738 "num_base_bdevs": 4, 00:23:34.738 "num_base_bdevs_discovered": 4, 00:23:34.738 "num_base_bdevs_operational": 4, 00:23:34.738 "base_bdevs_list": [ 00:23:34.738 { 00:23:34.738 "name": "pt1", 00:23:34.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:34.738 "is_configured": true, 00:23:34.738 "data_offset": 2048, 
00:23:34.738 "data_size": 63488 00:23:34.738 }, 00:23:34.738 { 00:23:34.738 "name": "pt2", 00:23:34.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:34.738 "is_configured": true, 00:23:34.738 "data_offset": 2048, 00:23:34.738 "data_size": 63488 00:23:34.738 }, 00:23:34.738 { 00:23:34.738 "name": "pt3", 00:23:34.738 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:34.738 "is_configured": true, 00:23:34.738 "data_offset": 2048, 00:23:34.738 "data_size": 63488 00:23:34.738 }, 00:23:34.738 { 00:23:34.738 "name": "pt4", 00:23:34.738 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:34.738 "is_configured": true, 00:23:34.738 "data_offset": 2048, 00:23:34.738 "data_size": 63488 00:23:34.738 } 00:23:34.738 ] 00:23:34.738 } 00:23:34.738 } 00:23:34.738 }' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:34.738 pt2 00:23:34.738 pt3 00:23:34.738 pt4' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.738 13:17:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.738 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:34.998 [2024-12-06 13:17:41.287157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=45d116c4-7b16-4be5-ae8b-b3c106d085ba 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
45d116c4-7b16-4be5-ae8b-b3c106d085ba ']' 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.998 [2024-12-06 13:17:41.334955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:34.998 [2024-12-06 13:17:41.334985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:34.998 [2024-12-06 13:17:41.335076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:34.998 [2024-12-06 13:17:41.335186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:34.998 [2024-12-06 13:17:41.335209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:34.998 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:34.999 
13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 13:17:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 [2024-12-06 13:17:41.495072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:34.999 [2024-12-06 13:17:41.497509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:34.999 [2024-12-06 13:17:41.497590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:34.999 [2024-12-06 13:17:41.497646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:34.999 [2024-12-06 13:17:41.497724] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:34.999 [2024-12-06 13:17:41.497792] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:34.999 [2024-12-06 13:17:41.497824] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:34.999 [2024-12-06 13:17:41.497853] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:34.999 [2024-12-06 13:17:41.497874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:34.999 [2024-12-06 13:17:41.497889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:34.999 request: 00:23:34.999 { 00:23:34.999 "name": "raid_bdev1", 00:23:34.999 "raid_level": "raid5f", 00:23:34.999 "base_bdevs": [ 00:23:34.999 "malloc1", 00:23:34.999 "malloc2", 00:23:34.999 "malloc3", 00:23:34.999 "malloc4" 00:23:34.999 ], 00:23:34.999 "strip_size_kb": 64, 00:23:34.999 "superblock": false, 00:23:34.999 "method": "bdev_raid_create", 00:23:34.999 "req_id": 1 00:23:34.999 } 00:23:34.999 Got JSON-RPC error response 
00:23:34.999 response: 00:23:34.999 { 00:23:34.999 "code": -17, 00:23:34.999 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:34.999 } 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.999 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.259 [2024-12-06 13:17:41.559053] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:35.259 [2024-12-06 13:17:41.559127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:23:35.259 [2024-12-06 13:17:41.559154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:35.259 [2024-12-06 13:17:41.559170] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.259 [2024-12-06 13:17:41.561978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.259 [2024-12-06 13:17:41.562025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:35.259 [2024-12-06 13:17:41.562131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:35.259 [2024-12-06 13:17:41.562202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:35.259 pt1 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.259 "name": "raid_bdev1", 00:23:35.259 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:35.259 "strip_size_kb": 64, 00:23:35.259 "state": "configuring", 00:23:35.259 "raid_level": "raid5f", 00:23:35.259 "superblock": true, 00:23:35.259 "num_base_bdevs": 4, 00:23:35.259 "num_base_bdevs_discovered": 1, 00:23:35.259 "num_base_bdevs_operational": 4, 00:23:35.259 "base_bdevs_list": [ 00:23:35.259 { 00:23:35.259 "name": "pt1", 00:23:35.259 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:35.259 "is_configured": true, 00:23:35.259 "data_offset": 2048, 00:23:35.259 "data_size": 63488 00:23:35.259 }, 00:23:35.259 { 00:23:35.259 "name": null, 00:23:35.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:35.259 "is_configured": false, 00:23:35.259 "data_offset": 2048, 00:23:35.259 "data_size": 63488 00:23:35.259 }, 00:23:35.259 { 00:23:35.259 "name": null, 00:23:35.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:35.259 "is_configured": false, 00:23:35.259 "data_offset": 2048, 00:23:35.259 "data_size": 63488 00:23:35.259 }, 00:23:35.259 { 00:23:35.259 "name": null, 00:23:35.259 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:35.259 "is_configured": false, 00:23:35.259 "data_offset": 2048, 00:23:35.259 "data_size": 63488 00:23:35.259 } 00:23:35.259 ] 00:23:35.259 }' 
00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.259 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.826 [2024-12-06 13:17:42.079210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:35.826 [2024-12-06 13:17:42.079296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.826 [2024-12-06 13:17:42.079325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:35.826 [2024-12-06 13:17:42.079342] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.826 [2024-12-06 13:17:42.079931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.826 [2024-12-06 13:17:42.079968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:35.826 [2024-12-06 13:17:42.080073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:35.826 [2024-12-06 13:17:42.080111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:35.826 pt2 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.826 [2024-12-06 13:17:42.087192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:23:35.826 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.826 "name": "raid_bdev1", 00:23:35.826 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:35.826 "strip_size_kb": 64, 00:23:35.826 "state": "configuring", 00:23:35.826 "raid_level": "raid5f", 00:23:35.826 "superblock": true, 00:23:35.826 "num_base_bdevs": 4, 00:23:35.826 "num_base_bdevs_discovered": 1, 00:23:35.826 "num_base_bdevs_operational": 4, 00:23:35.826 "base_bdevs_list": [ 00:23:35.826 { 00:23:35.826 "name": "pt1", 00:23:35.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:35.826 "is_configured": true, 00:23:35.826 "data_offset": 2048, 00:23:35.826 "data_size": 63488 00:23:35.826 }, 00:23:35.826 { 00:23:35.826 "name": null, 00:23:35.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:35.826 "is_configured": false, 00:23:35.826 "data_offset": 0, 00:23:35.826 "data_size": 63488 00:23:35.826 }, 00:23:35.826 { 00:23:35.826 "name": null, 00:23:35.826 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:35.826 "is_configured": false, 00:23:35.826 "data_offset": 2048, 00:23:35.826 "data_size": 63488 00:23:35.826 }, 00:23:35.827 { 00:23:35.827 "name": null, 00:23:35.827 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:35.827 "is_configured": false, 00:23:35.827 "data_offset": 2048, 00:23:35.827 "data_size": 63488 00:23:35.827 } 00:23:35.827 ] 00:23:35.827 }' 00:23:35.827 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.827 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.085 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:36.085 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:36.085 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.086 [2024-12-06 13:17:42.555350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:36.086 [2024-12-06 13:17:42.555441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.086 [2024-12-06 13:17:42.555504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:36.086 [2024-12-06 13:17:42.555521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.086 [2024-12-06 13:17:42.556114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.086 [2024-12-06 13:17:42.556144] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:36.086 [2024-12-06 13:17:42.556251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:36.086 [2024-12-06 13:17:42.556289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:36.086 pt2 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.086 [2024-12-06 13:17:42.563327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:23:36.086 [2024-12-06 13:17:42.563391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.086 [2024-12-06 13:17:42.563415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:36.086 [2024-12-06 13:17:42.563426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.086 [2024-12-06 13:17:42.563935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.086 [2024-12-06 13:17:42.563970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:36.086 [2024-12-06 13:17:42.564050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:36.086 [2024-12-06 13:17:42.564085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:36.086 pt3 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.086 [2024-12-06 13:17:42.571281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:36.086 [2024-12-06 13:17:42.571327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.086 [2024-12-06 13:17:42.571351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:36.086 [2024-12-06 13:17:42.571364] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.086 [2024-12-06 13:17:42.571845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.086 [2024-12-06 13:17:42.571875] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:36.086 [2024-12-06 13:17:42.571956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:36.086 [2024-12-06 13:17:42.571994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:36.086 [2024-12-06 13:17:42.572169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:36.086 [2024-12-06 13:17:42.572191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:36.086 [2024-12-06 13:17:42.572509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:36.086 [2024-12-06 13:17:42.579074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:36.086 [2024-12-06 13:17:42.579109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:36.086 [2024-12-06 13:17:42.579323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.086 pt4 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.086 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.345 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.345 "name": "raid_bdev1", 00:23:36.345 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:36.345 "strip_size_kb": 64, 00:23:36.345 "state": "online", 00:23:36.345 "raid_level": "raid5f", 00:23:36.345 "superblock": true, 00:23:36.345 "num_base_bdevs": 4, 00:23:36.345 "num_base_bdevs_discovered": 4, 00:23:36.345 "num_base_bdevs_operational": 4, 00:23:36.345 "base_bdevs_list": [ 00:23:36.345 { 00:23:36.345 "name": "pt1", 00:23:36.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:36.345 "is_configured": true, 00:23:36.345 
"data_offset": 2048, 00:23:36.345 "data_size": 63488 00:23:36.345 }, 00:23:36.345 { 00:23:36.345 "name": "pt2", 00:23:36.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:36.345 "is_configured": true, 00:23:36.345 "data_offset": 2048, 00:23:36.345 "data_size": 63488 00:23:36.345 }, 00:23:36.345 { 00:23:36.345 "name": "pt3", 00:23:36.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:36.345 "is_configured": true, 00:23:36.345 "data_offset": 2048, 00:23:36.345 "data_size": 63488 00:23:36.345 }, 00:23:36.345 { 00:23:36.345 "name": "pt4", 00:23:36.345 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:36.345 "is_configured": true, 00:23:36.345 "data_offset": 2048, 00:23:36.345 "data_size": 63488 00:23:36.345 } 00:23:36.345 ] 00:23:36.345 }' 00:23:36.345 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.345 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:36.604 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.604 13:17:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.604 [2024-12-06 13:17:43.095051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:36.605 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.864 "name": "raid_bdev1", 00:23:36.864 "aliases": [ 00:23:36.864 "45d116c4-7b16-4be5-ae8b-b3c106d085ba" 00:23:36.864 ], 00:23:36.864 "product_name": "Raid Volume", 00:23:36.864 "block_size": 512, 00:23:36.864 "num_blocks": 190464, 00:23:36.864 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:36.864 "assigned_rate_limits": { 00:23:36.864 "rw_ios_per_sec": 0, 00:23:36.864 "rw_mbytes_per_sec": 0, 00:23:36.864 "r_mbytes_per_sec": 0, 00:23:36.864 "w_mbytes_per_sec": 0 00:23:36.864 }, 00:23:36.864 "claimed": false, 00:23:36.864 "zoned": false, 00:23:36.864 "supported_io_types": { 00:23:36.864 "read": true, 00:23:36.864 "write": true, 00:23:36.864 "unmap": false, 00:23:36.864 "flush": false, 00:23:36.864 "reset": true, 00:23:36.864 "nvme_admin": false, 00:23:36.864 "nvme_io": false, 00:23:36.864 "nvme_io_md": false, 00:23:36.864 "write_zeroes": true, 00:23:36.864 "zcopy": false, 00:23:36.864 "get_zone_info": false, 00:23:36.864 "zone_management": false, 00:23:36.864 "zone_append": false, 00:23:36.864 "compare": false, 00:23:36.864 "compare_and_write": false, 00:23:36.864 "abort": false, 00:23:36.864 "seek_hole": false, 00:23:36.864 "seek_data": false, 00:23:36.864 "copy": false, 00:23:36.864 "nvme_iov_md": false 00:23:36.864 }, 00:23:36.864 "driver_specific": { 00:23:36.864 "raid": { 00:23:36.864 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:36.864 "strip_size_kb": 64, 00:23:36.864 "state": "online", 00:23:36.864 "raid_level": "raid5f", 00:23:36.864 "superblock": true, 00:23:36.864 "num_base_bdevs": 4, 00:23:36.864 "num_base_bdevs_discovered": 4, 
00:23:36.864 "num_base_bdevs_operational": 4, 00:23:36.864 "base_bdevs_list": [ 00:23:36.864 { 00:23:36.864 "name": "pt1", 00:23:36.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:36.864 "is_configured": true, 00:23:36.864 "data_offset": 2048, 00:23:36.864 "data_size": 63488 00:23:36.864 }, 00:23:36.864 { 00:23:36.864 "name": "pt2", 00:23:36.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:36.864 "is_configured": true, 00:23:36.864 "data_offset": 2048, 00:23:36.864 "data_size": 63488 00:23:36.864 }, 00:23:36.864 { 00:23:36.864 "name": "pt3", 00:23:36.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:36.864 "is_configured": true, 00:23:36.864 "data_offset": 2048, 00:23:36.864 "data_size": 63488 00:23:36.864 }, 00:23:36.864 { 00:23:36.864 "name": "pt4", 00:23:36.864 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:36.864 "is_configured": true, 00:23:36.864 "data_offset": 2048, 00:23:36.864 "data_size": 63488 00:23:36.864 } 00:23:36.864 ] 00:23:36.864 } 00:23:36.864 } 00:23:36.864 }' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:36.864 pt2 00:23:36.864 pt3 00:23:36.864 pt4' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.864 13:17:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.864 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.132 [2024-12-06 13:17:43.435115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.132 
13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 45d116c4-7b16-4be5-ae8b-b3c106d085ba '!=' 45d116c4-7b16-4be5-ae8b-b3c106d085ba ']' 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.132 [2024-12-06 13:17:43.486991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.132 "name": "raid_bdev1", 00:23:37.132 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:37.132 "strip_size_kb": 64, 00:23:37.132 "state": "online", 00:23:37.132 "raid_level": "raid5f", 00:23:37.132 "superblock": true, 00:23:37.132 "num_base_bdevs": 4, 00:23:37.132 "num_base_bdevs_discovered": 3, 00:23:37.132 "num_base_bdevs_operational": 3, 00:23:37.132 "base_bdevs_list": [ 00:23:37.132 { 00:23:37.132 "name": null, 00:23:37.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.132 "is_configured": false, 00:23:37.132 "data_offset": 0, 00:23:37.132 "data_size": 63488 00:23:37.132 }, 00:23:37.132 { 00:23:37.132 "name": "pt2", 00:23:37.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:37.132 "is_configured": true, 00:23:37.132 "data_offset": 2048, 00:23:37.132 "data_size": 63488 00:23:37.132 }, 00:23:37.132 { 00:23:37.132 "name": "pt3", 00:23:37.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:37.132 "is_configured": true, 00:23:37.132 "data_offset": 2048, 00:23:37.132 "data_size": 63488 00:23:37.132 }, 00:23:37.132 { 00:23:37.132 "name": "pt4", 00:23:37.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:37.132 "is_configured": true, 00:23:37.132 
"data_offset": 2048, 00:23:37.132 "data_size": 63488 00:23:37.132 } 00:23:37.132 ] 00:23:37.132 }' 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.132 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.699 [2024-12-06 13:17:44.027054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:37.699 [2024-12-06 13:17:44.027097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:37.699 [2024-12-06 13:17:44.027202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:37.699 [2024-12-06 13:17:44.027306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:37.699 [2024-12-06 13:17:44.027332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.699 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.700 [2024-12-06 13:17:44.111035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:37.700 [2024-12-06 13:17:44.111102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:37.700 [2024-12-06 13:17:44.111130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:37.700 [2024-12-06 13:17:44.111144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:37.700 [2024-12-06 13:17:44.114046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:37.700 [2024-12-06 13:17:44.114085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:37.700 [2024-12-06 13:17:44.114191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:37.700 [2024-12-06 13:17:44.114266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:37.700 pt2 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.700 "name": "raid_bdev1", 00:23:37.700 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:37.700 "strip_size_kb": 64, 00:23:37.700 "state": "configuring", 00:23:37.700 "raid_level": "raid5f", 00:23:37.700 "superblock": true, 00:23:37.700 
"num_base_bdevs": 4, 00:23:37.700 "num_base_bdevs_discovered": 1, 00:23:37.700 "num_base_bdevs_operational": 3, 00:23:37.700 "base_bdevs_list": [ 00:23:37.700 { 00:23:37.700 "name": null, 00:23:37.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.700 "is_configured": false, 00:23:37.700 "data_offset": 2048, 00:23:37.700 "data_size": 63488 00:23:37.700 }, 00:23:37.700 { 00:23:37.700 "name": "pt2", 00:23:37.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:37.700 "is_configured": true, 00:23:37.700 "data_offset": 2048, 00:23:37.700 "data_size": 63488 00:23:37.700 }, 00:23:37.700 { 00:23:37.700 "name": null, 00:23:37.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:37.700 "is_configured": false, 00:23:37.700 "data_offset": 2048, 00:23:37.700 "data_size": 63488 00:23:37.700 }, 00:23:37.700 { 00:23:37.700 "name": null, 00:23:37.700 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:37.700 "is_configured": false, 00:23:37.700 "data_offset": 2048, 00:23:37.700 "data_size": 63488 00:23:37.700 } 00:23:37.700 ] 00:23:37.700 }' 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.700 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.284 [2024-12-06 13:17:44.667254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:38.284 [2024-12-06 
13:17:44.667360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.284 [2024-12-06 13:17:44.667398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:38.284 [2024-12-06 13:17:44.667413] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.284 [2024-12-06 13:17:44.668008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.284 [2024-12-06 13:17:44.668034] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:38.284 [2024-12-06 13:17:44.668158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:38.284 [2024-12-06 13:17:44.668190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:38.284 pt3 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.284 "name": "raid_bdev1", 00:23:38.284 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:38.284 "strip_size_kb": 64, 00:23:38.284 "state": "configuring", 00:23:38.284 "raid_level": "raid5f", 00:23:38.284 "superblock": true, 00:23:38.284 "num_base_bdevs": 4, 00:23:38.284 "num_base_bdevs_discovered": 2, 00:23:38.284 "num_base_bdevs_operational": 3, 00:23:38.284 "base_bdevs_list": [ 00:23:38.284 { 00:23:38.284 "name": null, 00:23:38.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.284 "is_configured": false, 00:23:38.284 "data_offset": 2048, 00:23:38.284 "data_size": 63488 00:23:38.284 }, 00:23:38.284 { 00:23:38.284 "name": "pt2", 00:23:38.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:38.284 "is_configured": true, 00:23:38.284 "data_offset": 2048, 00:23:38.284 "data_size": 63488 00:23:38.284 }, 00:23:38.284 { 00:23:38.284 "name": "pt3", 00:23:38.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:38.284 "is_configured": true, 00:23:38.284 "data_offset": 2048, 00:23:38.284 "data_size": 63488 00:23:38.284 }, 00:23:38.284 { 00:23:38.284 "name": null, 00:23:38.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:38.284 "is_configured": false, 00:23:38.284 "data_offset": 2048, 
00:23:38.284 "data_size": 63488 00:23:38.284 } 00:23:38.284 ] 00:23:38.284 }' 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.284 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.852 [2024-12-06 13:17:45.191384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:38.852 [2024-12-06 13:17:45.191475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.852 [2024-12-06 13:17:45.191515] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:38.852 [2024-12-06 13:17:45.191529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.852 [2024-12-06 13:17:45.192099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.852 [2024-12-06 13:17:45.192125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:38.852 [2024-12-06 13:17:45.192231] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:38.852 [2024-12-06 13:17:45.192270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:38.852 [2024-12-06 13:17:45.192442] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:38.852 [2024-12-06 13:17:45.192475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:38.852 [2024-12-06 13:17:45.192789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:38.852 [2024-12-06 13:17:45.199243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:38.852 [2024-12-06 13:17:45.199282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:38.852 [2024-12-06 13:17:45.199655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.852 pt4 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.852 
13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.852 "name": "raid_bdev1", 00:23:38.852 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:38.852 "strip_size_kb": 64, 00:23:38.852 "state": "online", 00:23:38.852 "raid_level": "raid5f", 00:23:38.852 "superblock": true, 00:23:38.852 "num_base_bdevs": 4, 00:23:38.852 "num_base_bdevs_discovered": 3, 00:23:38.852 "num_base_bdevs_operational": 3, 00:23:38.852 "base_bdevs_list": [ 00:23:38.852 { 00:23:38.852 "name": null, 00:23:38.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.852 "is_configured": false, 00:23:38.852 "data_offset": 2048, 00:23:38.852 "data_size": 63488 00:23:38.852 }, 00:23:38.852 { 00:23:38.852 "name": "pt2", 00:23:38.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:38.852 "is_configured": true, 00:23:38.852 "data_offset": 2048, 00:23:38.852 "data_size": 63488 00:23:38.852 }, 00:23:38.852 { 00:23:38.852 "name": "pt3", 00:23:38.852 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:38.852 "is_configured": true, 00:23:38.852 "data_offset": 2048, 00:23:38.852 "data_size": 63488 00:23:38.852 }, 00:23:38.852 { 00:23:38.852 "name": "pt4", 00:23:38.852 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:38.852 "is_configured": true, 00:23:38.852 "data_offset": 2048, 00:23:38.852 "data_size": 63488 00:23:38.852 } 00:23:38.852 ] 00:23:38.852 }' 00:23:38.852 13:17:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.852 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 [2024-12-06 13:17:45.747087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:39.419 [2024-12-06 13:17:45.747125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.419 [2024-12-06 13:17:45.747228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.419 [2024-12-06 13:17:45.747326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.419 [2024-12-06 13:17:45.747355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 [2024-12-06 13:17:45.811084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:39.419 [2024-12-06 13:17:45.811157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.419 [2024-12-06 13:17:45.811193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:39.419 [2024-12-06 13:17:45.811211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.419 [2024-12-06 13:17:45.814120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.419 [2024-12-06 13:17:45.814163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:39.419 [2024-12-06 13:17:45.814282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:39.419 [2024-12-06 13:17:45.814352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:39.419 
[2024-12-06 13:17:45.814556] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:39.419 [2024-12-06 13:17:45.814582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:39.419 [2024-12-06 13:17:45.814603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:39.419 [2024-12-06 13:17:45.814680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:39.419 [2024-12-06 13:17:45.814820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:39.419 pt1 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.419 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.419 "name": "raid_bdev1", 00:23:39.419 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:39.419 "strip_size_kb": 64, 00:23:39.419 "state": "configuring", 00:23:39.419 "raid_level": "raid5f", 00:23:39.419 "superblock": true, 00:23:39.419 "num_base_bdevs": 4, 00:23:39.419 "num_base_bdevs_discovered": 2, 00:23:39.419 "num_base_bdevs_operational": 3, 00:23:39.419 "base_bdevs_list": [ 00:23:39.419 { 00:23:39.419 "name": null, 00:23:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.419 "is_configured": false, 00:23:39.419 "data_offset": 2048, 00:23:39.419 "data_size": 63488 00:23:39.419 }, 00:23:39.419 { 00:23:39.419 "name": "pt2", 00:23:39.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:39.419 "is_configured": true, 00:23:39.419 "data_offset": 2048, 00:23:39.419 "data_size": 63488 00:23:39.419 }, 00:23:39.419 { 00:23:39.419 "name": "pt3", 00:23:39.419 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:39.419 "is_configured": true, 00:23:39.419 "data_offset": 2048, 00:23:39.419 "data_size": 63488 00:23:39.420 }, 00:23:39.420 { 00:23:39.420 "name": null, 00:23:39.420 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:39.420 "is_configured": false, 00:23:39.420 "data_offset": 2048, 00:23:39.420 "data_size": 63488 00:23:39.420 } 00:23:39.420 ] 
00:23:39.420 }' 00:23:39.420 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.420 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.986 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.986 [2024-12-06 13:17:46.371283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:39.986 [2024-12-06 13:17:46.371358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.986 [2024-12-06 13:17:46.371397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:39.986 [2024-12-06 13:17:46.371412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.986 [2024-12-06 13:17:46.371985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.986 [2024-12-06 13:17:46.372010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:23:39.986 [2024-12-06 13:17:46.372118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:39.986 [2024-12-06 13:17:46.372151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:39.986 [2024-12-06 13:17:46.372325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:39.986 [2024-12-06 13:17:46.372341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:39.986 [2024-12-06 13:17:46.372677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:39.987 [2024-12-06 13:17:46.379152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:39.987 [2024-12-06 13:17:46.379187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:39.987 [2024-12-06 13:17:46.379537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.987 pt4 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.987 13:17:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.987 "name": "raid_bdev1", 00:23:39.987 "uuid": "45d116c4-7b16-4be5-ae8b-b3c106d085ba", 00:23:39.987 "strip_size_kb": 64, 00:23:39.987 "state": "online", 00:23:39.987 "raid_level": "raid5f", 00:23:39.987 "superblock": true, 00:23:39.987 "num_base_bdevs": 4, 00:23:39.987 "num_base_bdevs_discovered": 3, 00:23:39.987 "num_base_bdevs_operational": 3, 00:23:39.987 "base_bdevs_list": [ 00:23:39.987 { 00:23:39.987 "name": null, 00:23:39.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.987 "is_configured": false, 00:23:39.987 "data_offset": 2048, 00:23:39.987 "data_size": 63488 00:23:39.987 }, 00:23:39.987 { 00:23:39.987 "name": "pt2", 00:23:39.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:39.987 "is_configured": true, 00:23:39.987 "data_offset": 2048, 00:23:39.987 "data_size": 63488 00:23:39.987 }, 00:23:39.987 { 00:23:39.987 "name": "pt3", 00:23:39.987 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:39.987 "is_configured": true, 00:23:39.987 "data_offset": 2048, 00:23:39.987 "data_size": 63488 
00:23:39.987 }, 00:23:39.987 { 00:23:39.987 "name": "pt4", 00:23:39.987 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:39.987 "is_configured": true, 00:23:39.987 "data_offset": 2048, 00:23:39.987 "data_size": 63488 00:23:39.987 } 00:23:39.987 ] 00:23:39.987 }' 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.987 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:40.583 [2024-12-06 13:17:46.923237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 45d116c4-7b16-4be5-ae8b-b3c106d085ba '!=' 45d116c4-7b16-4be5-ae8b-b3c106d085ba ']' 00:23:40.583 13:17:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84861 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84861 ']' 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84861 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84861 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.583 killing process with pid 84861 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84861' 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84861 00:23:40.583 [2024-12-06 13:17:46.997186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.583 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84861 00:23:40.583 [2024-12-06 13:17:46.997321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:40.583 [2024-12-06 13:17:46.997424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:40.583 [2024-12-06 13:17:46.997468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:40.842 [2024-12-06 13:17:47.351357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:42.216 ************************************ 00:23:42.216 END TEST raid5f_superblock_test 00:23:42.216 
************************************ 00:23:42.216 13:17:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:42.216 00:23:42.216 real 0m9.359s 00:23:42.216 user 0m15.310s 00:23:42.216 sys 0m1.390s 00:23:42.216 13:17:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.216 13:17:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.216 13:17:48 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:23:42.216 13:17:48 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:23:42.216 13:17:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:42.216 13:17:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.216 13:17:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:42.216 ************************************ 00:23:42.216 START TEST raid5f_rebuild_test 00:23:42.216 ************************************ 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:42.216 13:17:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85352 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85352 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85352 ']' 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.216 13:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.216 [2024-12-06 13:17:48.566339] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:23:42.216 [2024-12-06 13:17:48.566729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:23:42.216 Zero copy mechanism will not be used. 
00:23:42.216 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85352 ] 00:23:42.216 [2024-12-06 13:17:48.740839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.474 [2024-12-06 13:17:48.875334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.732 [2024-12-06 13:17:49.087814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:42.732 [2024-12-06 13:17:49.088063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.314 BaseBdev1_malloc 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.314 [2024-12-06 13:17:49.606014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:43.314 [2024-12-06 13:17:49.606102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:23:43.314 [2024-12-06 13:17:49.606135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:43.314 [2024-12-06 13:17:49.606154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.314 [2024-12-06 13:17:49.608938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.314 [2024-12-06 13:17:49.608991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:43.314 BaseBdev1 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.314 BaseBdev2_malloc 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.314 [2024-12-06 13:17:49.658171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:43.314 [2024-12-06 13:17:49.658435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.314 [2024-12-06 13:17:49.658514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:43.314 [2024-12-06 13:17:49.658541] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.314 [2024-12-06 13:17:49.661390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.314 [2024-12-06 13:17:49.661574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:43.314 BaseBdev2 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.314 BaseBdev3_malloc 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.314 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.314 [2024-12-06 13:17:49.727998] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:43.314 [2024-12-06 13:17:49.728096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.315 [2024-12-06 13:17:49.728140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:43.315 [2024-12-06 13:17:49.728167] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.315 [2024-12-06 13:17:49.731381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.315 [2024-12-06 
13:17:49.731584] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:43.315 BaseBdev3 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.315 BaseBdev4_malloc 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.315 [2024-12-06 13:17:49.791137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:43.315 [2024-12-06 13:17:49.791361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.315 [2024-12-06 13:17:49.791410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:43.315 [2024-12-06 13:17:49.791466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.315 [2024-12-06 13:17:49.794694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.315 [2024-12-06 13:17:49.794750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:43.315 BaseBdev4 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.315 spare_malloc 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.315 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.572 spare_delay 00:23:43.572 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.573 [2024-12-06 13:17:49.851281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:43.573 [2024-12-06 13:17:49.851352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.573 [2024-12-06 13:17:49.851381] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:43.573 [2024-12-06 13:17:49.851399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.573 [2024-12-06 13:17:49.854148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.573 [2024-12-06 13:17:49.854329] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:43.573 spare 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.573 [2024-12-06 13:17:49.859346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:43.573 [2024-12-06 13:17:49.861764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:43.573 [2024-12-06 13:17:49.861852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:43.573 [2024-12-06 13:17:49.861932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:43.573 [2024-12-06 13:17:49.862059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:43.573 [2024-12-06 13:17:49.862081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:43.573 [2024-12-06 13:17:49.862428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:43.573 [2024-12-06 13:17:49.869105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:43.573 [2024-12-06 13:17:49.869255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:43.573 [2024-12-06 13:17:49.869581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.573 13:17:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.573 "name": "raid_bdev1", 00:23:43.573 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:43.573 "strip_size_kb": 64, 00:23:43.573 "state": "online", 00:23:43.573 "raid_level": "raid5f", 00:23:43.573 "superblock": false, 00:23:43.573 "num_base_bdevs": 4, 00:23:43.573 
"num_base_bdevs_discovered": 4, 00:23:43.573 "num_base_bdevs_operational": 4, 00:23:43.573 "base_bdevs_list": [ 00:23:43.573 { 00:23:43.573 "name": "BaseBdev1", 00:23:43.573 "uuid": "a9adc40d-4bc9-579c-8c49-3f4c7660d00a", 00:23:43.573 "is_configured": true, 00:23:43.573 "data_offset": 0, 00:23:43.573 "data_size": 65536 00:23:43.573 }, 00:23:43.573 { 00:23:43.573 "name": "BaseBdev2", 00:23:43.573 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:43.573 "is_configured": true, 00:23:43.573 "data_offset": 0, 00:23:43.573 "data_size": 65536 00:23:43.573 }, 00:23:43.573 { 00:23:43.573 "name": "BaseBdev3", 00:23:43.573 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:43.573 "is_configured": true, 00:23:43.573 "data_offset": 0, 00:23:43.573 "data_size": 65536 00:23:43.573 }, 00:23:43.573 { 00:23:43.573 "name": "BaseBdev4", 00:23:43.573 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:43.573 "is_configured": true, 00:23:43.573 "data_offset": 0, 00:23:43.573 "data_size": 65536 00:23:43.573 } 00:23:43.573 ] 00:23:43.573 }' 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.573 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:44.138 [2024-12-06 13:17:50.389380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:44.138 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:44.408 [2024-12-06 13:17:50.729255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:44.408 /dev/nbd0 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:44.408 1+0 records in 00:23:44.408 1+0 records out 00:23:44.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486927 s, 8.4 MB/s 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.408 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:44.409 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:44.975 512+0 records in 00:23:44.975 512+0 records out 00:23:44.976 100663296 bytes (101 MB, 96 MiB) copied, 0.638409 s, 158 MB/s 00:23:44.976 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:44.976 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:44.976 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:44.976 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:44.976 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:44.976 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:44.976 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:45.233 [2024-12-06 13:17:51.696043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.233 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:23:45.233 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:45.233 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:45.233 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:45.233 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:45.233 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:45.233 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.234 [2024-12-06 13:17:51.711894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.234 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.492 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.492 "name": "raid_bdev1", 00:23:45.492 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:45.492 "strip_size_kb": 64, 00:23:45.492 "state": "online", 00:23:45.492 "raid_level": "raid5f", 00:23:45.492 "superblock": false, 00:23:45.492 "num_base_bdevs": 4, 00:23:45.492 "num_base_bdevs_discovered": 3, 00:23:45.492 "num_base_bdevs_operational": 3, 00:23:45.492 "base_bdevs_list": [ 00:23:45.492 { 00:23:45.492 "name": null, 00:23:45.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.492 "is_configured": false, 00:23:45.492 "data_offset": 0, 00:23:45.492 "data_size": 65536 00:23:45.492 }, 00:23:45.492 { 00:23:45.492 "name": "BaseBdev2", 00:23:45.492 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:45.492 "is_configured": true, 00:23:45.492 "data_offset": 0, 00:23:45.492 "data_size": 65536 00:23:45.492 }, 00:23:45.492 { 00:23:45.492 "name": "BaseBdev3", 00:23:45.492 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:45.492 "is_configured": true, 00:23:45.492 "data_offset": 0, 
00:23:45.492 "data_size": 65536 00:23:45.492 }, 00:23:45.492 { 00:23:45.492 "name": "BaseBdev4", 00:23:45.492 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:45.492 "is_configured": true, 00:23:45.492 "data_offset": 0, 00:23:45.492 "data_size": 65536 00:23:45.492 } 00:23:45.492 ] 00:23:45.492 }' 00:23:45.492 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.492 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.751 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:45.751 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.751 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.751 [2024-12-06 13:17:52.204008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:45.751 [2024-12-06 13:17:52.218212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:23:45.751 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.751 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:45.751 [2024-12-06 13:17:52.227187] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.125 13:17:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.125 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.125 "name": "raid_bdev1", 00:23:47.125 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:47.125 "strip_size_kb": 64, 00:23:47.125 "state": "online", 00:23:47.125 "raid_level": "raid5f", 00:23:47.125 "superblock": false, 00:23:47.125 "num_base_bdevs": 4, 00:23:47.125 "num_base_bdevs_discovered": 4, 00:23:47.125 "num_base_bdevs_operational": 4, 00:23:47.125 "process": { 00:23:47.125 "type": "rebuild", 00:23:47.125 "target": "spare", 00:23:47.125 "progress": { 00:23:47.125 "blocks": 17280, 00:23:47.125 "percent": 8 00:23:47.125 } 00:23:47.125 }, 00:23:47.125 "base_bdevs_list": [ 00:23:47.125 { 00:23:47.125 "name": "spare", 00:23:47.125 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:47.125 "is_configured": true, 00:23:47.125 "data_offset": 0, 00:23:47.125 "data_size": 65536 00:23:47.125 }, 00:23:47.125 { 00:23:47.125 "name": "BaseBdev2", 00:23:47.125 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:47.125 "is_configured": true, 00:23:47.125 "data_offset": 0, 00:23:47.125 "data_size": 65536 00:23:47.125 }, 00:23:47.126 { 00:23:47.126 "name": "BaseBdev3", 00:23:47.126 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:47.126 "is_configured": true, 00:23:47.126 "data_offset": 0, 00:23:47.126 "data_size": 65536 00:23:47.126 }, 00:23:47.126 { 00:23:47.126 "name": "BaseBdev4", 00:23:47.126 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 
00:23:47.126 "is_configured": true, 00:23:47.126 "data_offset": 0, 00:23:47.126 "data_size": 65536 00:23:47.126 } 00:23:47.126 ] 00:23:47.126 }' 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.126 [2024-12-06 13:17:53.388415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:47.126 [2024-12-06 13:17:53.440228] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:47.126 [2024-12-06 13:17:53.440339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.126 [2024-12-06 13:17:53.440367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:47.126 [2024-12-06 13:17:53.440383] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.126 "name": "raid_bdev1", 00:23:47.126 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:47.126 "strip_size_kb": 64, 00:23:47.126 "state": "online", 00:23:47.126 "raid_level": "raid5f", 00:23:47.126 "superblock": false, 00:23:47.126 "num_base_bdevs": 4, 00:23:47.126 "num_base_bdevs_discovered": 3, 00:23:47.126 "num_base_bdevs_operational": 3, 00:23:47.126 "base_bdevs_list": [ 00:23:47.126 { 00:23:47.126 "name": null, 00:23:47.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.126 "is_configured": false, 00:23:47.126 "data_offset": 0, 00:23:47.126 "data_size": 65536 
00:23:47.126 }, 00:23:47.126 { 00:23:47.126 "name": "BaseBdev2", 00:23:47.126 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:47.126 "is_configured": true, 00:23:47.126 "data_offset": 0, 00:23:47.126 "data_size": 65536 00:23:47.126 }, 00:23:47.126 { 00:23:47.126 "name": "BaseBdev3", 00:23:47.126 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:47.126 "is_configured": true, 00:23:47.126 "data_offset": 0, 00:23:47.126 "data_size": 65536 00:23:47.126 }, 00:23:47.126 { 00:23:47.126 "name": "BaseBdev4", 00:23:47.126 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:47.126 "is_configured": true, 00:23:47.126 "data_offset": 0, 00:23:47.126 "data_size": 65536 00:23:47.126 } 00:23:47.126 ] 00:23:47.126 }' 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.126 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.692 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.692 "name": "raid_bdev1", 00:23:47.692 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:47.692 "strip_size_kb": 64, 00:23:47.692 "state": "online", 00:23:47.692 "raid_level": "raid5f", 00:23:47.692 "superblock": false, 00:23:47.692 "num_base_bdevs": 4, 00:23:47.692 "num_base_bdevs_discovered": 3, 00:23:47.692 "num_base_bdevs_operational": 3, 00:23:47.692 "base_bdevs_list": [ 00:23:47.692 { 00:23:47.692 "name": null, 00:23:47.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.692 "is_configured": false, 00:23:47.692 "data_offset": 0, 00:23:47.692 "data_size": 65536 00:23:47.692 }, 00:23:47.692 { 00:23:47.692 "name": "BaseBdev2", 00:23:47.692 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:47.692 "is_configured": true, 00:23:47.692 "data_offset": 0, 00:23:47.692 "data_size": 65536 00:23:47.692 }, 00:23:47.692 { 00:23:47.692 "name": "BaseBdev3", 00:23:47.692 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:47.692 "is_configured": true, 00:23:47.692 "data_offset": 0, 00:23:47.692 "data_size": 65536 00:23:47.692 }, 00:23:47.692 { 00:23:47.692 "name": "BaseBdev4", 00:23:47.692 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:47.692 "is_configured": true, 00:23:47.692 "data_offset": 0, 00:23:47.692 "data_size": 65536 00:23:47.692 } 00:23:47.692 ] 00:23:47.692 }' 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.692 [2024-12-06 13:17:54.127285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.692 [2024-12-06 13:17:54.140911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.692 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:47.692 [2024-12-06 13:17:54.149971] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.637 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.895 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.895 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.895 
"name": "raid_bdev1", 00:23:48.895 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:48.895 "strip_size_kb": 64, 00:23:48.895 "state": "online", 00:23:48.895 "raid_level": "raid5f", 00:23:48.895 "superblock": false, 00:23:48.895 "num_base_bdevs": 4, 00:23:48.895 "num_base_bdevs_discovered": 4, 00:23:48.895 "num_base_bdevs_operational": 4, 00:23:48.895 "process": { 00:23:48.895 "type": "rebuild", 00:23:48.895 "target": "spare", 00:23:48.895 "progress": { 00:23:48.895 "blocks": 17280, 00:23:48.895 "percent": 8 00:23:48.895 } 00:23:48.895 }, 00:23:48.895 "base_bdevs_list": [ 00:23:48.895 { 00:23:48.895 "name": "spare", 00:23:48.895 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:48.895 "is_configured": true, 00:23:48.895 "data_offset": 0, 00:23:48.895 "data_size": 65536 00:23:48.895 }, 00:23:48.895 { 00:23:48.895 "name": "BaseBdev2", 00:23:48.895 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:48.895 "is_configured": true, 00:23:48.895 "data_offset": 0, 00:23:48.895 "data_size": 65536 00:23:48.895 }, 00:23:48.895 { 00:23:48.895 "name": "BaseBdev3", 00:23:48.895 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:48.895 "is_configured": true, 00:23:48.895 "data_offset": 0, 00:23:48.895 "data_size": 65536 00:23:48.895 }, 00:23:48.895 { 00:23:48.895 "name": "BaseBdev4", 00:23:48.895 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:48.895 "is_configured": true, 00:23:48.895 "data_offset": 0, 00:23:48.895 "data_size": 65536 00:23:48.895 } 00:23:48.895 ] 00:23:48.895 }' 00:23:48.895 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.895 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.895 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.895 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.896 13:17:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=687 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.896 "name": "raid_bdev1", 00:23:48.896 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:48.896 "strip_size_kb": 64, 00:23:48.896 "state": "online", 00:23:48.896 "raid_level": "raid5f", 00:23:48.896 "superblock": false, 00:23:48.896 "num_base_bdevs": 4, 00:23:48.896 
"num_base_bdevs_discovered": 4, 00:23:48.896 "num_base_bdevs_operational": 4, 00:23:48.896 "process": { 00:23:48.896 "type": "rebuild", 00:23:48.896 "target": "spare", 00:23:48.896 "progress": { 00:23:48.896 "blocks": 21120, 00:23:48.896 "percent": 10 00:23:48.896 } 00:23:48.896 }, 00:23:48.896 "base_bdevs_list": [ 00:23:48.896 { 00:23:48.896 "name": "spare", 00:23:48.896 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:48.896 "is_configured": true, 00:23:48.896 "data_offset": 0, 00:23:48.896 "data_size": 65536 00:23:48.896 }, 00:23:48.896 { 00:23:48.896 "name": "BaseBdev2", 00:23:48.896 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:48.896 "is_configured": true, 00:23:48.896 "data_offset": 0, 00:23:48.896 "data_size": 65536 00:23:48.896 }, 00:23:48.896 { 00:23:48.896 "name": "BaseBdev3", 00:23:48.896 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:48.896 "is_configured": true, 00:23:48.896 "data_offset": 0, 00:23:48.896 "data_size": 65536 00:23:48.896 }, 00:23:48.896 { 00:23:48.896 "name": "BaseBdev4", 00:23:48.896 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:48.896 "is_configured": true, 00:23:48.896 "data_offset": 0, 00:23:48.896 "data_size": 65536 00:23:48.896 } 00:23:48.896 ] 00:23:48.896 }' 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.896 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:49.153 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.153 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:50.089 "name": "raid_bdev1", 00:23:50.089 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:50.089 "strip_size_kb": 64, 00:23:50.089 "state": "online", 00:23:50.089 "raid_level": "raid5f", 00:23:50.089 "superblock": false, 00:23:50.089 "num_base_bdevs": 4, 00:23:50.089 "num_base_bdevs_discovered": 4, 00:23:50.089 "num_base_bdevs_operational": 4, 00:23:50.089 "process": { 00:23:50.089 "type": "rebuild", 00:23:50.089 "target": "spare", 00:23:50.089 "progress": { 00:23:50.089 "blocks": 44160, 00:23:50.089 "percent": 22 00:23:50.089 } 00:23:50.089 }, 00:23:50.089 "base_bdevs_list": [ 00:23:50.089 { 00:23:50.089 "name": "spare", 00:23:50.089 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:50.089 "is_configured": true, 00:23:50.089 "data_offset": 0, 00:23:50.089 "data_size": 65536 00:23:50.089 }, 00:23:50.089 { 00:23:50.089 "name": "BaseBdev2", 00:23:50.089 "uuid": 
"c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:50.089 "is_configured": true, 00:23:50.089 "data_offset": 0, 00:23:50.089 "data_size": 65536 00:23:50.089 }, 00:23:50.089 { 00:23:50.089 "name": "BaseBdev3", 00:23:50.089 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:50.089 "is_configured": true, 00:23:50.089 "data_offset": 0, 00:23:50.089 "data_size": 65536 00:23:50.089 }, 00:23:50.089 { 00:23:50.089 "name": "BaseBdev4", 00:23:50.089 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:50.089 "is_configured": true, 00:23:50.089 "data_offset": 0, 00:23:50.089 "data_size": 65536 00:23:50.089 } 00:23:50.089 ] 00:23:50.089 }' 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.089 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.347 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.347 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.284 13:17:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.284 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.284 "name": "raid_bdev1", 00:23:51.284 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:51.284 "strip_size_kb": 64, 00:23:51.284 "state": "online", 00:23:51.284 "raid_level": "raid5f", 00:23:51.284 "superblock": false, 00:23:51.284 "num_base_bdevs": 4, 00:23:51.284 "num_base_bdevs_discovered": 4, 00:23:51.284 "num_base_bdevs_operational": 4, 00:23:51.284 "process": { 00:23:51.284 "type": "rebuild", 00:23:51.284 "target": "spare", 00:23:51.284 "progress": { 00:23:51.284 "blocks": 65280, 00:23:51.284 "percent": 33 00:23:51.284 } 00:23:51.284 }, 00:23:51.284 "base_bdevs_list": [ 00:23:51.284 { 00:23:51.284 "name": "spare", 00:23:51.284 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:51.285 "is_configured": true, 00:23:51.285 "data_offset": 0, 00:23:51.285 "data_size": 65536 00:23:51.285 }, 00:23:51.285 { 00:23:51.285 "name": "BaseBdev2", 00:23:51.285 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:51.285 "is_configured": true, 00:23:51.285 "data_offset": 0, 00:23:51.285 "data_size": 65536 00:23:51.285 }, 00:23:51.285 { 00:23:51.285 "name": "BaseBdev3", 00:23:51.285 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:51.285 "is_configured": true, 00:23:51.285 "data_offset": 0, 00:23:51.285 "data_size": 65536 00:23:51.285 }, 00:23:51.285 { 00:23:51.285 "name": "BaseBdev4", 00:23:51.285 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:51.285 "is_configured": true, 00:23:51.285 "data_offset": 0, 00:23:51.285 "data_size": 65536 00:23:51.285 } 
00:23:51.285 ] 00:23:51.285 }' 00:23:51.285 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.285 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.285 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.285 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.285 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:52.662 "name": "raid_bdev1", 00:23:52.662 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:52.662 
"strip_size_kb": 64, 00:23:52.662 "state": "online", 00:23:52.662 "raid_level": "raid5f", 00:23:52.662 "superblock": false, 00:23:52.662 "num_base_bdevs": 4, 00:23:52.662 "num_base_bdevs_discovered": 4, 00:23:52.662 "num_base_bdevs_operational": 4, 00:23:52.662 "process": { 00:23:52.662 "type": "rebuild", 00:23:52.662 "target": "spare", 00:23:52.662 "progress": { 00:23:52.662 "blocks": 88320, 00:23:52.662 "percent": 44 00:23:52.662 } 00:23:52.662 }, 00:23:52.662 "base_bdevs_list": [ 00:23:52.662 { 00:23:52.662 "name": "spare", 00:23:52.662 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:52.662 "is_configured": true, 00:23:52.662 "data_offset": 0, 00:23:52.662 "data_size": 65536 00:23:52.662 }, 00:23:52.662 { 00:23:52.662 "name": "BaseBdev2", 00:23:52.662 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:52.662 "is_configured": true, 00:23:52.662 "data_offset": 0, 00:23:52.662 "data_size": 65536 00:23:52.662 }, 00:23:52.662 { 00:23:52.662 "name": "BaseBdev3", 00:23:52.662 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:52.662 "is_configured": true, 00:23:52.662 "data_offset": 0, 00:23:52.662 "data_size": 65536 00:23:52.662 }, 00:23:52.662 { 00:23:52.662 "name": "BaseBdev4", 00:23:52.662 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:52.662 "is_configured": true, 00:23:52.662 "data_offset": 0, 00:23:52.662 "data_size": 65536 00:23:52.662 } 00:23:52.662 ] 00:23:52.662 }' 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:52.662 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:53.597 13:17:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.597 13:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.597 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:53.597 "name": "raid_bdev1", 00:23:53.597 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:53.597 "strip_size_kb": 64, 00:23:53.597 "state": "online", 00:23:53.597 "raid_level": "raid5f", 00:23:53.597 "superblock": false, 00:23:53.597 "num_base_bdevs": 4, 00:23:53.597 "num_base_bdevs_discovered": 4, 00:23:53.597 "num_base_bdevs_operational": 4, 00:23:53.597 "process": { 00:23:53.597 "type": "rebuild", 00:23:53.597 "target": "spare", 00:23:53.597 "progress": { 00:23:53.597 "blocks": 109440, 00:23:53.597 "percent": 55 00:23:53.597 } 00:23:53.597 }, 00:23:53.597 "base_bdevs_list": [ 00:23:53.597 { 00:23:53.597 "name": "spare", 00:23:53.597 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 
00:23:53.597 "is_configured": true, 00:23:53.597 "data_offset": 0, 00:23:53.597 "data_size": 65536 00:23:53.597 }, 00:23:53.597 { 00:23:53.597 "name": "BaseBdev2", 00:23:53.597 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:53.597 "is_configured": true, 00:23:53.597 "data_offset": 0, 00:23:53.597 "data_size": 65536 00:23:53.597 }, 00:23:53.597 { 00:23:53.597 "name": "BaseBdev3", 00:23:53.597 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:53.597 "is_configured": true, 00:23:53.597 "data_offset": 0, 00:23:53.597 "data_size": 65536 00:23:53.597 }, 00:23:53.597 { 00:23:53.597 "name": "BaseBdev4", 00:23:53.597 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:53.597 "is_configured": true, 00:23:53.597 "data_offset": 0, 00:23:53.597 "data_size": 65536 00:23:53.597 } 00:23:53.597 ] 00:23:53.597 }' 00:23:53.597 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:53.597 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.597 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:53.597 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.597 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:54.972 "name": "raid_bdev1", 00:23:54.972 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:54.972 "strip_size_kb": 64, 00:23:54.972 "state": "online", 00:23:54.972 "raid_level": "raid5f", 00:23:54.972 "superblock": false, 00:23:54.972 "num_base_bdevs": 4, 00:23:54.972 "num_base_bdevs_discovered": 4, 00:23:54.972 "num_base_bdevs_operational": 4, 00:23:54.972 "process": { 00:23:54.972 "type": "rebuild", 00:23:54.972 "target": "spare", 00:23:54.972 "progress": { 00:23:54.972 "blocks": 132480, 00:23:54.972 "percent": 67 00:23:54.972 } 00:23:54.972 }, 00:23:54.972 "base_bdevs_list": [ 00:23:54.972 { 00:23:54.972 "name": "spare", 00:23:54.972 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:54.972 "is_configured": true, 00:23:54.972 "data_offset": 0, 00:23:54.972 "data_size": 65536 00:23:54.972 }, 00:23:54.972 { 00:23:54.972 "name": "BaseBdev2", 00:23:54.972 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:54.972 "is_configured": true, 00:23:54.972 "data_offset": 0, 00:23:54.972 "data_size": 65536 00:23:54.972 }, 00:23:54.972 { 00:23:54.972 "name": "BaseBdev3", 00:23:54.972 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:54.972 "is_configured": true, 00:23:54.972 "data_offset": 0, 00:23:54.972 "data_size": 65536 00:23:54.972 }, 00:23:54.972 { 00:23:54.972 "name": 
"BaseBdev4", 00:23:54.972 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:54.972 "is_configured": true, 00:23:54.972 "data_offset": 0, 00:23:54.972 "data_size": 65536 00:23:54.972 } 00:23:54.972 ] 00:23:54.972 }' 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.972 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.907 13:18:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:55.907 "name": "raid_bdev1", 00:23:55.907 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:55.907 "strip_size_kb": 64, 00:23:55.907 "state": "online", 00:23:55.907 "raid_level": "raid5f", 00:23:55.907 "superblock": false, 00:23:55.907 "num_base_bdevs": 4, 00:23:55.907 "num_base_bdevs_discovered": 4, 00:23:55.907 "num_base_bdevs_operational": 4, 00:23:55.907 "process": { 00:23:55.907 "type": "rebuild", 00:23:55.907 "target": "spare", 00:23:55.907 "progress": { 00:23:55.907 "blocks": 153600, 00:23:55.907 "percent": 78 00:23:55.907 } 00:23:55.907 }, 00:23:55.907 "base_bdevs_list": [ 00:23:55.907 { 00:23:55.907 "name": "spare", 00:23:55.907 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:55.907 "is_configured": true, 00:23:55.907 "data_offset": 0, 00:23:55.907 "data_size": 65536 00:23:55.907 }, 00:23:55.907 { 00:23:55.907 "name": "BaseBdev2", 00:23:55.907 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:55.907 "is_configured": true, 00:23:55.907 "data_offset": 0, 00:23:55.907 "data_size": 65536 00:23:55.907 }, 00:23:55.907 { 00:23:55.907 "name": "BaseBdev3", 00:23:55.907 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:55.907 "is_configured": true, 00:23:55.907 "data_offset": 0, 00:23:55.907 "data_size": 65536 00:23:55.907 }, 00:23:55.907 { 00:23:55.907 "name": "BaseBdev4", 00:23:55.907 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:55.907 "is_configured": true, 00:23:55.907 "data_offset": 0, 00:23:55.907 "data_size": 65536 00:23:55.907 } 00:23:55.907 ] 00:23:55.907 }' 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.907 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:56.166 13:18:02 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:56.166 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.102 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:57.102 "name": "raid_bdev1", 00:23:57.102 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:57.102 "strip_size_kb": 64, 00:23:57.102 "state": "online", 00:23:57.102 "raid_level": "raid5f", 00:23:57.102 "superblock": false, 00:23:57.102 "num_base_bdevs": 4, 00:23:57.102 "num_base_bdevs_discovered": 4, 00:23:57.102 "num_base_bdevs_operational": 4, 00:23:57.102 "process": { 00:23:57.102 "type": "rebuild", 00:23:57.102 "target": "spare", 00:23:57.103 "progress": { 00:23:57.103 "blocks": 176640, 00:23:57.103 "percent": 89 
00:23:57.103 } 00:23:57.103 }, 00:23:57.103 "base_bdevs_list": [ 00:23:57.103 { 00:23:57.103 "name": "spare", 00:23:57.103 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:57.103 "is_configured": true, 00:23:57.103 "data_offset": 0, 00:23:57.103 "data_size": 65536 00:23:57.103 }, 00:23:57.103 { 00:23:57.103 "name": "BaseBdev2", 00:23:57.103 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:57.103 "is_configured": true, 00:23:57.103 "data_offset": 0, 00:23:57.103 "data_size": 65536 00:23:57.103 }, 00:23:57.103 { 00:23:57.103 "name": "BaseBdev3", 00:23:57.103 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:57.103 "is_configured": true, 00:23:57.103 "data_offset": 0, 00:23:57.103 "data_size": 65536 00:23:57.103 }, 00:23:57.103 { 00:23:57.103 "name": "BaseBdev4", 00:23:57.103 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:57.103 "is_configured": true, 00:23:57.103 "data_offset": 0, 00:23:57.103 "data_size": 65536 00:23:57.103 } 00:23:57.103 ] 00:23:57.103 }' 00:23:57.103 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:57.103 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.103 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:57.103 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.103 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:58.039 [2024-12-06 13:18:04.559496] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:58.039 [2024-12-06 13:18:04.559596] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:58.039 [2024-12-06 13:18:04.559662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:58.298 "name": "raid_bdev1", 00:23:58.298 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:58.298 "strip_size_kb": 64, 00:23:58.298 "state": "online", 00:23:58.298 "raid_level": "raid5f", 00:23:58.298 "superblock": false, 00:23:58.298 "num_base_bdevs": 4, 00:23:58.298 "num_base_bdevs_discovered": 4, 00:23:58.298 "num_base_bdevs_operational": 4, 00:23:58.298 "base_bdevs_list": [ 00:23:58.298 { 00:23:58.298 "name": "spare", 00:23:58.298 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:58.298 "is_configured": true, 00:23:58.298 "data_offset": 0, 00:23:58.298 "data_size": 65536 00:23:58.298 }, 00:23:58.298 { 00:23:58.298 "name": "BaseBdev2", 00:23:58.298 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:58.298 "is_configured": true, 00:23:58.298 
"data_offset": 0, 00:23:58.298 "data_size": 65536 00:23:58.298 }, 00:23:58.298 { 00:23:58.298 "name": "BaseBdev3", 00:23:58.298 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:58.298 "is_configured": true, 00:23:58.298 "data_offset": 0, 00:23:58.298 "data_size": 65536 00:23:58.298 }, 00:23:58.298 { 00:23:58.298 "name": "BaseBdev4", 00:23:58.298 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:58.298 "is_configured": true, 00:23:58.298 "data_offset": 0, 00:23:58.298 "data_size": 65536 00:23:58.298 } 00:23:58.298 ] 00:23:58.298 }' 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.298 13:18:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.298 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:58.557 "name": "raid_bdev1", 00:23:58.557 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:58.557 "strip_size_kb": 64, 00:23:58.557 "state": "online", 00:23:58.557 "raid_level": "raid5f", 00:23:58.557 "superblock": false, 00:23:58.557 "num_base_bdevs": 4, 00:23:58.557 "num_base_bdevs_discovered": 4, 00:23:58.557 "num_base_bdevs_operational": 4, 00:23:58.557 "base_bdevs_list": [ 00:23:58.557 { 00:23:58.557 "name": "spare", 00:23:58.557 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:58.557 "is_configured": true, 00:23:58.557 "data_offset": 0, 00:23:58.557 "data_size": 65536 00:23:58.557 }, 00:23:58.557 { 00:23:58.557 "name": "BaseBdev2", 00:23:58.557 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:58.557 "is_configured": true, 00:23:58.557 "data_offset": 0, 00:23:58.557 "data_size": 65536 00:23:58.557 }, 00:23:58.557 { 00:23:58.557 "name": "BaseBdev3", 00:23:58.557 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:58.557 "is_configured": true, 00:23:58.557 "data_offset": 0, 00:23:58.557 "data_size": 65536 00:23:58.557 }, 00:23:58.557 { 00:23:58.557 "name": "BaseBdev4", 00:23:58.557 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:58.557 "is_configured": true, 00:23:58.557 "data_offset": 0, 00:23:58.557 "data_size": 65536 00:23:58.557 } 00:23:58.557 ] 00:23:58.557 }' 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.557 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.557 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.557 "name": "raid_bdev1", 00:23:58.557 "uuid": "64a19510-58a7-4f7a-99e3-900bca1ec92b", 00:23:58.557 "strip_size_kb": 64, 00:23:58.557 "state": "online", 00:23:58.557 "raid_level": "raid5f", 
00:23:58.557 "superblock": false, 00:23:58.557 "num_base_bdevs": 4, 00:23:58.557 "num_base_bdevs_discovered": 4, 00:23:58.557 "num_base_bdevs_operational": 4, 00:23:58.557 "base_bdevs_list": [ 00:23:58.557 { 00:23:58.557 "name": "spare", 00:23:58.557 "uuid": "d812d186-9dea-501e-b926-fa46972fe00a", 00:23:58.557 "is_configured": true, 00:23:58.557 "data_offset": 0, 00:23:58.557 "data_size": 65536 00:23:58.557 }, 00:23:58.557 { 00:23:58.557 "name": "BaseBdev2", 00:23:58.557 "uuid": "c1964b9e-e781-5635-a6be-754f9e8d26e2", 00:23:58.557 "is_configured": true, 00:23:58.557 "data_offset": 0, 00:23:58.557 "data_size": 65536 00:23:58.557 }, 00:23:58.557 { 00:23:58.557 "name": "BaseBdev3", 00:23:58.557 "uuid": "fbcf4a03-1a31-5a30-9e28-9f015ee36f5a", 00:23:58.557 "is_configured": true, 00:23:58.557 "data_offset": 0, 00:23:58.557 "data_size": 65536 00:23:58.557 }, 00:23:58.557 { 00:23:58.557 "name": "BaseBdev4", 00:23:58.557 "uuid": "ee23ceaf-191c-540a-8faa-fa5745df7a5e", 00:23:58.557 "is_configured": true, 00:23:58.557 "data_offset": 0, 00:23:58.557 "data_size": 65536 00:23:58.557 } 00:23:58.557 ] 00:23:58.557 }' 00:23:58.557 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.557 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.124 [2024-12-06 13:18:05.466578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.124 [2024-12-06 13:18:05.466772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.124 [2024-12-06 13:18:05.467006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:23:59.124 [2024-12-06 13:18:05.467256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.124 [2024-12-06 13:18:05.467413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@12 -- # local i 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.124 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:59.382 /dev/nbd0 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.382 1+0 records in 00:23:59.382 1+0 records out 00:23:59.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040805 s, 10.0 MB/s 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.382 13:18:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:23:59.383 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.383 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:59.383 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:59.383 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.383 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.383 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:59.950 /dev/nbd1 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:23:59.950 1+0 records in 00:23:59.950 1+0 records out 00:23:59.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350043 s, 11.7 MB/s 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.950 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:00.519 
13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:00.519 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:00.519 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85352 00:24:00.778 13:18:07 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85352 ']' 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85352 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85352 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.778 killing process with pid 85352 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85352' 00:24:00.778 Received shutdown signal, test time was about 60.000000 seconds 00:24:00.778 00:24:00.778 Latency(us) 00:24:00.778 [2024-12-06T13:18:07.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.778 [2024-12-06T13:18:07.307Z] =================================================================================================================== 00:24:00.778 [2024-12-06T13:18:07.307Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85352 00:24:00.778 [2024-12-06 13:18:07.084403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:00.778 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85352 00:24:01.038 [2024-12-06 13:18:07.523728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:24:02.418 ************************************ 00:24:02.418 END TEST raid5f_rebuild_test 
00:24:02.418 ************************************ 00:24:02.418 00:24:02.418 real 0m20.102s 00:24:02.418 user 0m24.954s 00:24:02.418 sys 0m2.310s 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.418 13:18:08 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:24:02.418 13:18:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:02.418 13:18:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:02.418 13:18:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:02.418 ************************************ 00:24:02.418 START TEST raid5f_rebuild_test_sb 00:24:02.418 ************************************ 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:02.418 13:18:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:24:02.418 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true 
']' 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85861 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85861 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85861 ']' 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.419 13:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:02.419 [2024-12-06 13:18:08.748208] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:24:02.419 [2024-12-06 13:18:08.748403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85861 ] 00:24:02.419 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:02.419 Zero copy mechanism will not be used. 00:24:02.419 [2024-12-06 13:18:08.942486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.677 [2024-12-06 13:18:09.098679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.936 [2024-12-06 13:18:09.305396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.936 [2024-12-06 13:18:09.305474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.194 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.194 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:24:03.194 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:03.194 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:03.194 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.194 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.477 BaseBdev1_malloc 00:24:03.477 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.477 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:03.477 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.477 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.477 [2024-12-06 13:18:09.766703] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:03.477 [2024-12-06 13:18:09.766780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.477 [2024-12-06 13:18:09.766822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:03.477 [2024-12-06 13:18:09.766852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.477 [2024-12-06 13:18:09.769598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.477 [2024-12-06 13:18:09.769651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:03.477 BaseBdev1 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 BaseBdev2_malloc 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 [2024-12-06 13:18:09.818672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:03.478 
[2024-12-06 13:18:09.818746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.478 [2024-12-06 13:18:09.818775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:03.478 [2024-12-06 13:18:09.818794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.478 [2024-12-06 13:18:09.821541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.478 [2024-12-06 13:18:09.821592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:03.478 BaseBdev2 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 BaseBdev3_malloc 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 [2024-12-06 13:18:09.885441] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:03.478 [2024-12-06 13:18:09.885541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.478 [2024-12-06 13:18:09.885576] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:03.478 [2024-12-06 13:18:09.885595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.478 [2024-12-06 13:18:09.888409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.478 [2024-12-06 13:18:09.888502] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:03.478 BaseBdev3 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 BaseBdev4_malloc 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 [2024-12-06 13:18:09.939168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:03.478 [2024-12-06 13:18:09.939244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.478 [2024-12-06 13:18:09.939276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:03.478 [2024-12-06 13:18:09.939294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:24:03.478 [2024-12-06 13:18:09.942031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.478 [2024-12-06 13:18:09.942084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:03.478 BaseBdev4 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 spare_malloc 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.478 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.737 spare_delay 00:24:03.737 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.737 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:03.737 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.737 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.737 [2024-12-06 13:18:10.004335] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:03.737 [2024-12-06 13:18:10.004405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.737 [2024-12-06 13:18:10.004434] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:03.737 [2024-12-06 13:18:10.004469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.737 [2024-12-06 13:18:10.007620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.737 [2024-12-06 13:18:10.007675] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:03.737 spare 00:24:03.737 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.737 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:03.737 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.737 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.737 [2024-12-06 13:18:10.016534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:03.737 [2024-12-06 13:18:10.019012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:03.737 [2024-12-06 13:18:10.019109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:03.737 [2024-12-06 13:18:10.019195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:03.737 [2024-12-06 13:18:10.019498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:03.737 [2024-12-06 13:18:10.019533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:03.737 [2024-12-06 13:18:10.019879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:03.737 [2024-12-06 13:18:10.026706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:03.737 
[2024-12-06 13:18:10.026758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:03.737 [2024-12-06 13:18:10.027024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.737 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.737 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:03.737 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.737 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.738 "name": "raid_bdev1", 00:24:03.738 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:03.738 "strip_size_kb": 64, 00:24:03.738 "state": "online", 00:24:03.738 "raid_level": "raid5f", 00:24:03.738 "superblock": true, 00:24:03.738 "num_base_bdevs": 4, 00:24:03.738 "num_base_bdevs_discovered": 4, 00:24:03.738 "num_base_bdevs_operational": 4, 00:24:03.738 "base_bdevs_list": [ 00:24:03.738 { 00:24:03.738 "name": "BaseBdev1", 00:24:03.738 "uuid": "f2db8011-3272-512e-98ad-636df52e4ca7", 00:24:03.738 "is_configured": true, 00:24:03.738 "data_offset": 2048, 00:24:03.738 "data_size": 63488 00:24:03.738 }, 00:24:03.738 { 00:24:03.738 "name": "BaseBdev2", 00:24:03.738 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:03.738 "is_configured": true, 00:24:03.738 "data_offset": 2048, 00:24:03.738 "data_size": 63488 00:24:03.738 }, 00:24:03.738 { 00:24:03.738 "name": "BaseBdev3", 00:24:03.738 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:03.738 "is_configured": true, 00:24:03.738 "data_offset": 2048, 00:24:03.738 "data_size": 63488 00:24:03.738 }, 00:24:03.738 { 00:24:03.738 "name": "BaseBdev4", 00:24:03.738 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:03.738 "is_configured": true, 00:24:03.738 "data_offset": 2048, 00:24:03.738 "data_size": 63488 00:24:03.738 } 00:24:03.738 ] 00:24:03.738 }' 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.738 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 [2024-12-06 13:18:10.543015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:04.305 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:04.563 [2024-12-06 13:18:10.918838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:04.563 /dev/nbd0 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:24:04.563 1+0 records in 00:24:04.563 1+0 records out 00:24:04.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236494 s, 17.3 MB/s 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:24:04.563 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:24:05.129 496+0 records in 00:24:05.129 496+0 records out 00:24:05.129 97517568 bytes (98 MB, 93 MiB) copied, 0.57823 s, 169 MB/s 00:24:05.129 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:05.129 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:05.129 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:05.129 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:24:05.129 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:05.129 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:05.130 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:05.389 [2024-12-06 13:18:11.877160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.389 [2024-12-06 13:18:11.885064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.389 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.390 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.390 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.390 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.649 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.649 "name": "raid_bdev1", 00:24:05.649 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:05.649 "strip_size_kb": 64, 00:24:05.649 "state": "online", 00:24:05.649 "raid_level": "raid5f", 00:24:05.649 "superblock": true, 00:24:05.649 "num_base_bdevs": 4, 00:24:05.649 "num_base_bdevs_discovered": 3, 00:24:05.649 
"num_base_bdevs_operational": 3, 00:24:05.649 "base_bdevs_list": [ 00:24:05.649 { 00:24:05.649 "name": null, 00:24:05.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.649 "is_configured": false, 00:24:05.649 "data_offset": 0, 00:24:05.649 "data_size": 63488 00:24:05.649 }, 00:24:05.649 { 00:24:05.649 "name": "BaseBdev2", 00:24:05.649 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:05.649 "is_configured": true, 00:24:05.649 "data_offset": 2048, 00:24:05.649 "data_size": 63488 00:24:05.649 }, 00:24:05.649 { 00:24:05.649 "name": "BaseBdev3", 00:24:05.649 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:05.649 "is_configured": true, 00:24:05.649 "data_offset": 2048, 00:24:05.649 "data_size": 63488 00:24:05.649 }, 00:24:05.649 { 00:24:05.649 "name": "BaseBdev4", 00:24:05.649 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:05.649 "is_configured": true, 00:24:05.649 "data_offset": 2048, 00:24:05.649 "data_size": 63488 00:24:05.649 } 00:24:05.649 ] 00:24:05.649 }' 00:24:05.649 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.649 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.908 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:05.908 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.908 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.908 [2024-12-06 13:18:12.385202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:05.908 [2024-12-06 13:18:12.399609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:24:05.908 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.908 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:05.908 
[2024-12-06 13:18:12.408585] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.287 "name": "raid_bdev1", 00:24:07.287 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:07.287 "strip_size_kb": 64, 00:24:07.287 "state": "online", 00:24:07.287 "raid_level": "raid5f", 00:24:07.287 "superblock": true, 00:24:07.287 "num_base_bdevs": 4, 00:24:07.287 "num_base_bdevs_discovered": 4, 00:24:07.287 "num_base_bdevs_operational": 4, 00:24:07.287 "process": { 00:24:07.287 "type": "rebuild", 00:24:07.287 "target": "spare", 00:24:07.287 "progress": { 00:24:07.287 "blocks": 17280, 00:24:07.287 "percent": 9 00:24:07.287 } 00:24:07.287 }, 00:24:07.287 "base_bdevs_list": [ 00:24:07.287 { 00:24:07.287 "name": 
"spare", 00:24:07.287 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:07.287 "is_configured": true, 00:24:07.287 "data_offset": 2048, 00:24:07.287 "data_size": 63488 00:24:07.287 }, 00:24:07.287 { 00:24:07.287 "name": "BaseBdev2", 00:24:07.287 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:07.287 "is_configured": true, 00:24:07.287 "data_offset": 2048, 00:24:07.287 "data_size": 63488 00:24:07.287 }, 00:24:07.287 { 00:24:07.287 "name": "BaseBdev3", 00:24:07.287 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:07.287 "is_configured": true, 00:24:07.287 "data_offset": 2048, 00:24:07.287 "data_size": 63488 00:24:07.287 }, 00:24:07.287 { 00:24:07.287 "name": "BaseBdev4", 00:24:07.287 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:07.287 "is_configured": true, 00:24:07.287 "data_offset": 2048, 00:24:07.287 "data_size": 63488 00:24:07.287 } 00:24:07.287 ] 00:24:07.287 }' 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.287 [2024-12-06 13:18:13.566059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.287 [2024-12-06 13:18:13.620675] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:07.287 [2024-12-06 
13:18:13.620767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.287 [2024-12-06 13:18:13.620802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.287 [2024-12-06 13:18:13.620816] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:07.287 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.288 "name": "raid_bdev1", 00:24:07.288 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:07.288 "strip_size_kb": 64, 00:24:07.288 "state": "online", 00:24:07.288 "raid_level": "raid5f", 00:24:07.288 "superblock": true, 00:24:07.288 "num_base_bdevs": 4, 00:24:07.288 "num_base_bdevs_discovered": 3, 00:24:07.288 "num_base_bdevs_operational": 3, 00:24:07.288 "base_bdevs_list": [ 00:24:07.288 { 00:24:07.288 "name": null, 00:24:07.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.288 "is_configured": false, 00:24:07.288 "data_offset": 0, 00:24:07.288 "data_size": 63488 00:24:07.288 }, 00:24:07.288 { 00:24:07.288 "name": "BaseBdev2", 00:24:07.288 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:07.288 "is_configured": true, 00:24:07.288 "data_offset": 2048, 00:24:07.288 "data_size": 63488 00:24:07.288 }, 00:24:07.288 { 00:24:07.288 "name": "BaseBdev3", 00:24:07.288 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:07.288 "is_configured": true, 00:24:07.288 "data_offset": 2048, 00:24:07.288 "data_size": 63488 00:24:07.288 }, 00:24:07.288 { 00:24:07.288 "name": "BaseBdev4", 00:24:07.288 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:07.288 "is_configured": true, 00:24:07.288 "data_offset": 2048, 00:24:07.288 "data_size": 63488 00:24:07.288 } 00:24:07.288 ] 00:24:07.288 }' 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.288 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.856 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.856 "name": "raid_bdev1", 00:24:07.856 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:07.856 "strip_size_kb": 64, 00:24:07.856 "state": "online", 00:24:07.856 "raid_level": "raid5f", 00:24:07.856 "superblock": true, 00:24:07.856 "num_base_bdevs": 4, 00:24:07.856 "num_base_bdevs_discovered": 3, 00:24:07.856 "num_base_bdevs_operational": 3, 00:24:07.856 "base_bdevs_list": [ 00:24:07.856 { 00:24:07.856 "name": null, 00:24:07.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.857 "is_configured": false, 00:24:07.857 "data_offset": 0, 00:24:07.857 "data_size": 63488 00:24:07.857 }, 00:24:07.857 { 00:24:07.857 "name": "BaseBdev2", 00:24:07.857 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:07.857 "is_configured": true, 00:24:07.857 "data_offset": 2048, 00:24:07.857 "data_size": 63488 00:24:07.857 }, 00:24:07.857 { 00:24:07.857 "name": "BaseBdev3", 00:24:07.857 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:07.857 "is_configured": true, 
00:24:07.857 "data_offset": 2048, 00:24:07.857 "data_size": 63488 00:24:07.857 }, 00:24:07.857 { 00:24:07.857 "name": "BaseBdev4", 00:24:07.857 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:07.857 "is_configured": true, 00:24:07.857 "data_offset": 2048, 00:24:07.857 "data_size": 63488 00:24:07.857 } 00:24:07.857 ] 00:24:07.857 }' 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.857 [2024-12-06 13:18:14.343758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.857 [2024-12-06 13:18:14.357199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.857 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:07.857 [2024-12-06 13:18:14.365991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.253 13:18:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.253 "name": "raid_bdev1", 00:24:09.253 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:09.253 "strip_size_kb": 64, 00:24:09.253 "state": "online", 00:24:09.253 "raid_level": "raid5f", 00:24:09.253 "superblock": true, 00:24:09.253 "num_base_bdevs": 4, 00:24:09.253 "num_base_bdevs_discovered": 4, 00:24:09.253 "num_base_bdevs_operational": 4, 00:24:09.253 "process": { 00:24:09.253 "type": "rebuild", 00:24:09.253 "target": "spare", 00:24:09.253 "progress": { 00:24:09.253 "blocks": 17280, 00:24:09.253 "percent": 9 00:24:09.253 } 00:24:09.253 }, 00:24:09.253 "base_bdevs_list": [ 00:24:09.253 { 00:24:09.253 "name": "spare", 00:24:09.253 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:09.253 "is_configured": true, 00:24:09.253 "data_offset": 2048, 00:24:09.253 "data_size": 63488 00:24:09.253 }, 00:24:09.253 { 00:24:09.253 "name": "BaseBdev2", 00:24:09.253 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:09.253 "is_configured": true, 00:24:09.253 "data_offset": 2048, 00:24:09.253 "data_size": 63488 
00:24:09.253 }, 00:24:09.253 { 00:24:09.253 "name": "BaseBdev3", 00:24:09.253 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:09.253 "is_configured": true, 00:24:09.253 "data_offset": 2048, 00:24:09.253 "data_size": 63488 00:24:09.253 }, 00:24:09.253 { 00:24:09.253 "name": "BaseBdev4", 00:24:09.253 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:09.253 "is_configured": true, 00:24:09.253 "data_offset": 2048, 00:24:09.253 "data_size": 63488 00:24:09.253 } 00:24:09.253 ] 00:24:09.253 }' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:09.253 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=707 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.253 13:18:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.253 "name": "raid_bdev1", 00:24:09.253 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:09.253 "strip_size_kb": 64, 00:24:09.253 "state": "online", 00:24:09.253 "raid_level": "raid5f", 00:24:09.253 "superblock": true, 00:24:09.253 "num_base_bdevs": 4, 00:24:09.253 "num_base_bdevs_discovered": 4, 00:24:09.253 "num_base_bdevs_operational": 4, 00:24:09.253 "process": { 00:24:09.253 "type": "rebuild", 00:24:09.253 "target": "spare", 00:24:09.253 "progress": { 00:24:09.253 "blocks": 21120, 00:24:09.253 "percent": 11 00:24:09.253 } 00:24:09.253 }, 00:24:09.253 "base_bdevs_list": [ 00:24:09.253 { 00:24:09.253 "name": "spare", 00:24:09.253 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:09.253 "is_configured": true, 00:24:09.253 "data_offset": 2048, 00:24:09.253 "data_size": 63488 00:24:09.253 }, 00:24:09.253 { 00:24:09.253 "name": "BaseBdev2", 00:24:09.253 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:09.253 "is_configured": true, 00:24:09.253 "data_offset": 2048, 00:24:09.253 "data_size": 63488 
00:24:09.253 }, 00:24:09.253 { 00:24:09.253 "name": "BaseBdev3", 00:24:09.253 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:09.253 "is_configured": true, 00:24:09.253 "data_offset": 2048, 00:24:09.253 "data_size": 63488 00:24:09.253 }, 00:24:09.253 { 00:24:09.253 "name": "BaseBdev4", 00:24:09.253 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:09.253 "is_configured": true, 00:24:09.253 "data_offset": 2048, 00:24:09.253 "data_size": 63488 00:24:09.253 } 00:24:09.253 ] 00:24:09.253 }' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.253 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.188 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.448 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.448 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.448 "name": "raid_bdev1", 00:24:10.448 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:10.448 "strip_size_kb": 64, 00:24:10.448 "state": "online", 00:24:10.448 "raid_level": "raid5f", 00:24:10.448 "superblock": true, 00:24:10.448 "num_base_bdevs": 4, 00:24:10.448 "num_base_bdevs_discovered": 4, 00:24:10.448 "num_base_bdevs_operational": 4, 00:24:10.448 "process": { 00:24:10.448 "type": "rebuild", 00:24:10.448 "target": "spare", 00:24:10.448 "progress": { 00:24:10.448 "blocks": 44160, 00:24:10.448 "percent": 23 00:24:10.448 } 00:24:10.448 }, 00:24:10.448 "base_bdevs_list": [ 00:24:10.448 { 00:24:10.448 "name": "spare", 00:24:10.448 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:10.448 "is_configured": true, 00:24:10.448 "data_offset": 2048, 00:24:10.448 "data_size": 63488 00:24:10.448 }, 00:24:10.448 { 00:24:10.448 "name": "BaseBdev2", 00:24:10.448 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:10.448 "is_configured": true, 00:24:10.448 "data_offset": 2048, 00:24:10.448 "data_size": 63488 00:24:10.448 }, 00:24:10.448 { 00:24:10.448 "name": "BaseBdev3", 00:24:10.448 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:10.448 "is_configured": true, 00:24:10.448 "data_offset": 2048, 00:24:10.448 "data_size": 63488 00:24:10.448 }, 00:24:10.448 { 00:24:10.448 "name": "BaseBdev4", 00:24:10.448 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:10.448 "is_configured": true, 00:24:10.448 "data_offset": 2048, 00:24:10.448 "data_size": 63488 00:24:10.448 } 00:24:10.448 ] 00:24:10.448 }' 00:24:10.448 13:18:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.448 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.448 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.448 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.448 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.384 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.643 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.643 "name": "raid_bdev1", 00:24:11.643 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:11.643 
"strip_size_kb": 64, 00:24:11.643 "state": "online", 00:24:11.643 "raid_level": "raid5f", 00:24:11.643 "superblock": true, 00:24:11.643 "num_base_bdevs": 4, 00:24:11.643 "num_base_bdevs_discovered": 4, 00:24:11.643 "num_base_bdevs_operational": 4, 00:24:11.643 "process": { 00:24:11.643 "type": "rebuild", 00:24:11.643 "target": "spare", 00:24:11.643 "progress": { 00:24:11.643 "blocks": 65280, 00:24:11.643 "percent": 34 00:24:11.643 } 00:24:11.643 }, 00:24:11.643 "base_bdevs_list": [ 00:24:11.643 { 00:24:11.643 "name": "spare", 00:24:11.643 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:11.643 "is_configured": true, 00:24:11.643 "data_offset": 2048, 00:24:11.643 "data_size": 63488 00:24:11.643 }, 00:24:11.643 { 00:24:11.643 "name": "BaseBdev2", 00:24:11.643 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:11.643 "is_configured": true, 00:24:11.643 "data_offset": 2048, 00:24:11.643 "data_size": 63488 00:24:11.643 }, 00:24:11.643 { 00:24:11.643 "name": "BaseBdev3", 00:24:11.643 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:11.643 "is_configured": true, 00:24:11.643 "data_offset": 2048, 00:24:11.643 "data_size": 63488 00:24:11.643 }, 00:24:11.643 { 00:24:11.643 "name": "BaseBdev4", 00:24:11.643 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:11.643 "is_configured": true, 00:24:11.643 "data_offset": 2048, 00:24:11.643 "data_size": 63488 00:24:11.643 } 00:24:11.643 ] 00:24:11.643 }' 00:24:11.643 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.643 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:11.643 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.643 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.643 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:12.580 
13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.580 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:12.580 "name": "raid_bdev1", 00:24:12.580 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:12.580 "strip_size_kb": 64, 00:24:12.580 "state": "online", 00:24:12.580 "raid_level": "raid5f", 00:24:12.580 "superblock": true, 00:24:12.580 "num_base_bdevs": 4, 00:24:12.580 "num_base_bdevs_discovered": 4, 00:24:12.580 "num_base_bdevs_operational": 4, 00:24:12.580 "process": { 00:24:12.580 "type": "rebuild", 00:24:12.580 "target": "spare", 00:24:12.580 "progress": { 00:24:12.580 "blocks": 88320, 00:24:12.580 "percent": 46 00:24:12.580 } 00:24:12.580 }, 00:24:12.580 "base_bdevs_list": [ 00:24:12.580 { 00:24:12.580 "name": "spare", 00:24:12.580 "uuid": 
"b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:12.580 "is_configured": true, 00:24:12.580 "data_offset": 2048, 00:24:12.580 "data_size": 63488 00:24:12.580 }, 00:24:12.580 { 00:24:12.580 "name": "BaseBdev2", 00:24:12.580 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:12.580 "is_configured": true, 00:24:12.580 "data_offset": 2048, 00:24:12.580 "data_size": 63488 00:24:12.580 }, 00:24:12.580 { 00:24:12.580 "name": "BaseBdev3", 00:24:12.580 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:12.580 "is_configured": true, 00:24:12.581 "data_offset": 2048, 00:24:12.581 "data_size": 63488 00:24:12.581 }, 00:24:12.581 { 00:24:12.581 "name": "BaseBdev4", 00:24:12.581 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:12.581 "is_configured": true, 00:24:12.581 "data_offset": 2048, 00:24:12.581 "data_size": 63488 00:24:12.581 } 00:24:12.581 ] 00:24:12.581 }' 00:24:12.581 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:12.840 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.840 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:12.840 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:12.840 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:13.777 "name": "raid_bdev1", 00:24:13.777 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:13.777 "strip_size_kb": 64, 00:24:13.777 "state": "online", 00:24:13.777 "raid_level": "raid5f", 00:24:13.777 "superblock": true, 00:24:13.777 "num_base_bdevs": 4, 00:24:13.777 "num_base_bdevs_discovered": 4, 00:24:13.777 "num_base_bdevs_operational": 4, 00:24:13.777 "process": { 00:24:13.777 "type": "rebuild", 00:24:13.777 "target": "spare", 00:24:13.777 "progress": { 00:24:13.777 "blocks": 109440, 00:24:13.777 "percent": 57 00:24:13.777 } 00:24:13.777 }, 00:24:13.777 "base_bdevs_list": [ 00:24:13.777 { 00:24:13.777 "name": "spare", 00:24:13.777 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:13.777 "is_configured": true, 00:24:13.777 "data_offset": 2048, 00:24:13.777 "data_size": 63488 00:24:13.777 }, 00:24:13.777 { 00:24:13.777 "name": "BaseBdev2", 00:24:13.777 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:13.777 "is_configured": true, 00:24:13.777 "data_offset": 2048, 00:24:13.777 "data_size": 63488 00:24:13.777 }, 00:24:13.777 { 00:24:13.777 "name": "BaseBdev3", 00:24:13.777 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:13.777 "is_configured": true, 00:24:13.777 
"data_offset": 2048, 00:24:13.777 "data_size": 63488 00:24:13.777 }, 00:24:13.777 { 00:24:13.777 "name": "BaseBdev4", 00:24:13.777 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:13.777 "is_configured": true, 00:24:13.777 "data_offset": 2048, 00:24:13.777 "data_size": 63488 00:24:13.777 } 00:24:13.777 ] 00:24:13.777 }' 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.777 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.035 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.035 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:14.972 "name": "raid_bdev1", 00:24:14.972 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:14.972 "strip_size_kb": 64, 00:24:14.972 "state": "online", 00:24:14.972 "raid_level": "raid5f", 00:24:14.972 "superblock": true, 00:24:14.972 "num_base_bdevs": 4, 00:24:14.972 "num_base_bdevs_discovered": 4, 00:24:14.972 "num_base_bdevs_operational": 4, 00:24:14.972 "process": { 00:24:14.972 "type": "rebuild", 00:24:14.972 "target": "spare", 00:24:14.972 "progress": { 00:24:14.972 "blocks": 132480, 00:24:14.972 "percent": 69 00:24:14.972 } 00:24:14.972 }, 00:24:14.972 "base_bdevs_list": [ 00:24:14.972 { 00:24:14.972 "name": "spare", 00:24:14.972 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:14.972 "is_configured": true, 00:24:14.972 "data_offset": 2048, 00:24:14.972 "data_size": 63488 00:24:14.972 }, 00:24:14.972 { 00:24:14.972 "name": "BaseBdev2", 00:24:14.972 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:14.972 "is_configured": true, 00:24:14.972 "data_offset": 2048, 00:24:14.972 "data_size": 63488 00:24:14.972 }, 00:24:14.972 { 00:24:14.972 "name": "BaseBdev3", 00:24:14.972 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:14.972 "is_configured": true, 00:24:14.972 "data_offset": 2048, 00:24:14.972 "data_size": 63488 00:24:14.972 }, 00:24:14.972 { 00:24:14.972 "name": "BaseBdev4", 00:24:14.972 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:14.972 "is_configured": true, 00:24:14.972 "data_offset": 2048, 00:24:14.972 "data_size": 63488 00:24:14.972 } 00:24:14.972 ] 00:24:14.972 }' 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:24:14.972 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:15.231 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.231 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.168 "name": "raid_bdev1", 00:24:16.168 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:16.168 "strip_size_kb": 64, 00:24:16.168 "state": "online", 00:24:16.168 "raid_level": "raid5f", 00:24:16.168 "superblock": true, 00:24:16.168 "num_base_bdevs": 4, 00:24:16.168 "num_base_bdevs_discovered": 4, 
00:24:16.168 "num_base_bdevs_operational": 4, 00:24:16.168 "process": { 00:24:16.168 "type": "rebuild", 00:24:16.168 "target": "spare", 00:24:16.168 "progress": { 00:24:16.168 "blocks": 153600, 00:24:16.168 "percent": 80 00:24:16.168 } 00:24:16.168 }, 00:24:16.168 "base_bdevs_list": [ 00:24:16.168 { 00:24:16.168 "name": "spare", 00:24:16.168 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:16.168 "is_configured": true, 00:24:16.168 "data_offset": 2048, 00:24:16.168 "data_size": 63488 00:24:16.168 }, 00:24:16.168 { 00:24:16.168 "name": "BaseBdev2", 00:24:16.168 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:16.168 "is_configured": true, 00:24:16.168 "data_offset": 2048, 00:24:16.168 "data_size": 63488 00:24:16.168 }, 00:24:16.168 { 00:24:16.168 "name": "BaseBdev3", 00:24:16.168 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:16.168 "is_configured": true, 00:24:16.168 "data_offset": 2048, 00:24:16.168 "data_size": 63488 00:24:16.168 }, 00:24:16.168 { 00:24:16.168 "name": "BaseBdev4", 00:24:16.168 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:16.168 "is_configured": true, 00:24:16.168 "data_offset": 2048, 00:24:16.168 "data_size": 63488 00:24:16.168 } 00:24:16.168 ] 00:24:16.168 }' 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.168 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:17.545 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:17.546 "name": "raid_bdev1", 00:24:17.546 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:17.546 "strip_size_kb": 64, 00:24:17.546 "state": "online", 00:24:17.546 "raid_level": "raid5f", 00:24:17.546 "superblock": true, 00:24:17.546 "num_base_bdevs": 4, 00:24:17.546 "num_base_bdevs_discovered": 4, 00:24:17.546 "num_base_bdevs_operational": 4, 00:24:17.546 "process": { 00:24:17.546 "type": "rebuild", 00:24:17.546 "target": "spare", 00:24:17.546 "progress": { 00:24:17.546 "blocks": 176640, 00:24:17.546 "percent": 92 00:24:17.546 } 00:24:17.546 }, 00:24:17.546 "base_bdevs_list": [ 00:24:17.546 { 00:24:17.546 "name": "spare", 00:24:17.546 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:17.546 "is_configured": true, 00:24:17.546 "data_offset": 2048, 00:24:17.546 "data_size": 63488 00:24:17.546 }, 00:24:17.546 { 00:24:17.546 "name": "BaseBdev2", 
00:24:17.546 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:17.546 "is_configured": true, 00:24:17.546 "data_offset": 2048, 00:24:17.546 "data_size": 63488 00:24:17.546 }, 00:24:17.546 { 00:24:17.546 "name": "BaseBdev3", 00:24:17.546 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:17.546 "is_configured": true, 00:24:17.546 "data_offset": 2048, 00:24:17.546 "data_size": 63488 00:24:17.546 }, 00:24:17.546 { 00:24:17.546 "name": "BaseBdev4", 00:24:17.546 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:17.546 "is_configured": true, 00:24:17.546 "data_offset": 2048, 00:24:17.546 "data_size": 63488 00:24:17.546 } 00:24:17.546 ] 00:24:17.546 }' 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:17.546 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:18.114 [2024-12-06 13:18:24.467350] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:18.114 [2024-12-06 13:18:24.467485] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:18.114 [2024-12-06 13:18:24.467681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.373 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:18.373 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.373 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.373 13:18:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:18.373 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:18.373 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.374 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.374 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.374 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.374 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.374 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.374 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.374 "name": "raid_bdev1", 00:24:18.374 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:18.374 "strip_size_kb": 64, 00:24:18.374 "state": "online", 00:24:18.374 "raid_level": "raid5f", 00:24:18.374 "superblock": true, 00:24:18.374 "num_base_bdevs": 4, 00:24:18.374 "num_base_bdevs_discovered": 4, 00:24:18.374 "num_base_bdevs_operational": 4, 00:24:18.374 "base_bdevs_list": [ 00:24:18.374 { 00:24:18.374 "name": "spare", 00:24:18.374 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:18.374 "is_configured": true, 00:24:18.374 "data_offset": 2048, 00:24:18.374 "data_size": 63488 00:24:18.374 }, 00:24:18.374 { 00:24:18.374 "name": "BaseBdev2", 00:24:18.374 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:18.374 "is_configured": true, 00:24:18.374 "data_offset": 2048, 00:24:18.374 "data_size": 63488 00:24:18.374 }, 00:24:18.374 { 00:24:18.374 "name": "BaseBdev3", 00:24:18.374 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:18.374 "is_configured": true, 00:24:18.374 "data_offset": 2048, 00:24:18.374 
"data_size": 63488 00:24:18.374 }, 00:24:18.374 { 00:24:18.374 "name": "BaseBdev4", 00:24:18.374 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:18.374 "is_configured": true, 00:24:18.374 "data_offset": 2048, 00:24:18.374 "data_size": 63488 00:24:18.374 } 00:24:18.374 ] 00:24:18.374 }' 00:24:18.374 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.633 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.633 13:18:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.633 "name": "raid_bdev1", 00:24:18.633 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:18.633 "strip_size_kb": 64, 00:24:18.633 "state": "online", 00:24:18.633 "raid_level": "raid5f", 00:24:18.633 "superblock": true, 00:24:18.633 "num_base_bdevs": 4, 00:24:18.633 "num_base_bdevs_discovered": 4, 00:24:18.633 "num_base_bdevs_operational": 4, 00:24:18.633 "base_bdevs_list": [ 00:24:18.633 { 00:24:18.633 "name": "spare", 00:24:18.633 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:18.633 "is_configured": true, 00:24:18.633 "data_offset": 2048, 00:24:18.633 "data_size": 63488 00:24:18.633 }, 00:24:18.633 { 00:24:18.633 "name": "BaseBdev2", 00:24:18.633 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:18.633 "is_configured": true, 00:24:18.633 "data_offset": 2048, 00:24:18.633 "data_size": 63488 00:24:18.633 }, 00:24:18.633 { 00:24:18.633 "name": "BaseBdev3", 00:24:18.633 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:18.633 "is_configured": true, 00:24:18.633 "data_offset": 2048, 00:24:18.633 "data_size": 63488 00:24:18.633 }, 00:24:18.633 { 00:24:18.633 "name": "BaseBdev4", 00:24:18.633 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:18.633 "is_configured": true, 00:24:18.633 "data_offset": 2048, 00:24:18.633 "data_size": 63488 00:24:18.633 } 00:24:18.633 ] 00:24:18.633 }' 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.633 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.892 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.892 "name": "raid_bdev1", 00:24:18.892 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:18.892 "strip_size_kb": 64, 00:24:18.892 "state": "online", 00:24:18.892 "raid_level": "raid5f", 00:24:18.892 "superblock": true, 00:24:18.892 "num_base_bdevs": 4, 00:24:18.892 "num_base_bdevs_discovered": 4, 00:24:18.892 
"num_base_bdevs_operational": 4, 00:24:18.892 "base_bdevs_list": [ 00:24:18.892 { 00:24:18.892 "name": "spare", 00:24:18.892 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:18.892 "is_configured": true, 00:24:18.892 "data_offset": 2048, 00:24:18.892 "data_size": 63488 00:24:18.892 }, 00:24:18.892 { 00:24:18.892 "name": "BaseBdev2", 00:24:18.892 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:18.892 "is_configured": true, 00:24:18.892 "data_offset": 2048, 00:24:18.892 "data_size": 63488 00:24:18.892 }, 00:24:18.892 { 00:24:18.892 "name": "BaseBdev3", 00:24:18.892 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:18.892 "is_configured": true, 00:24:18.892 "data_offset": 2048, 00:24:18.892 "data_size": 63488 00:24:18.892 }, 00:24:18.892 { 00:24:18.892 "name": "BaseBdev4", 00:24:18.892 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:18.892 "is_configured": true, 00:24:18.892 "data_offset": 2048, 00:24:18.892 "data_size": 63488 00:24:18.892 } 00:24:18.892 ] 00:24:18.892 }' 00:24:18.892 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.892 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.151 [2024-12-06 13:18:25.646879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:19.151 [2024-12-06 13:18:25.646928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:19.151 [2024-12-06 13:18:25.647029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:19.151 [2024-12-06 13:18:25.647155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:24:19.151 [2024-12-06 13:18:25.647186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:19.151 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:19.409 13:18:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.409 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:19.703 /dev/nbd0 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.703 1+0 records in 00:24:19.703 1+0 records out 00:24:19.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195731 s, 20.9 MB/s 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.703 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:19.987 /dev/nbd1 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.987 1+0 records in 00:24:19.987 1+0 records out 00:24:19.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035106 s, 11.7 MB/s 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.987 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:20.246 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:20.246 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:20.246 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:20.246 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:20.246 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:20.246 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:20.246 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
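The trace above repeatedly exercises the same readiness-and-compare pattern: poll until each nbd device shows up in `/proc/partitions`, sanity-read it with a single direct `dd`, then `cmp` the base bdev against the rebuilt spare while skipping the superblock region. A condensed sketch follows; the function names mirror SPDK's helpers in `test/bdev/nbd_common.sh` and `test/common/autotest_common.sh`, but the bodies here are simplified illustrations, not the upstream implementations.

```shell
# Poll until the kernel exposes an nbd device -- the `grep -q -w ... /proc/partitions`
# loop seen for nbd0 and nbd1 in the log (the real helper follows up with a
# one-block direct-I/O dd read to confirm the device is usable).
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && return 0
        sleep 0.1
    done
    return 1
}

# bdev_raid_get_bdevs reports data_offset in 512-byte blocks, while `cmp -i`
# takes a byte offset: 2048 blocks * 512 B = 1048576 B, which is exactly the
# `cmp -i 1048576 /dev/nbd0 /dev/nbd1` invocation in this trace.
superblock_skip_bytes() {
    echo $(( $1 * 512 ))
}

superblock_skip_bytes 2048  # -> 1048576
```

After both devices pass `waitfornbd`, comparing with the computed skip ensures only the data region is checked, since the spare's superblock legitimately differs from the base bdev's.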
00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:20.507 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.765 [2024-12-06 13:18:27.145566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:20.765 [2024-12-06 13:18:27.145628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.765 [2024-12-06 13:18:27.145662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:20.765 [2024-12-06 13:18:27.145679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.765 [2024-12-06 13:18:27.148787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.765 [2024-12-06 13:18:27.148829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:20.765 [2024-12-06 13:18:27.148957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:20.765 [2024-12-06 13:18:27.149043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.765 [2024-12-06 13:18:27.149233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:20.765 [2024-12-06 13:18:27.149376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:24:20.765 [2024-12-06 13:18:27.149540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:20.765 spare 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.765 [2024-12-06 13:18:27.249732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:20.765 [2024-12-06 13:18:27.249779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:20.765 [2024-12-06 13:18:27.250197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:24:20.765 [2024-12-06 13:18:27.256692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:20.765 [2024-12-06 13:18:27.256722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:20.765 [2024-12-06 13:18:27.256993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:20.765 13:18:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:20.765 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.766 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.024 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.024 "name": "raid_bdev1", 00:24:21.024 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:21.024 "strip_size_kb": 64, 00:24:21.024 "state": "online", 00:24:21.024 "raid_level": "raid5f", 00:24:21.024 "superblock": true, 00:24:21.024 "num_base_bdevs": 4, 00:24:21.024 "num_base_bdevs_discovered": 4, 00:24:21.024 "num_base_bdevs_operational": 4, 00:24:21.024 "base_bdevs_list": [ 00:24:21.024 { 00:24:21.024 "name": "spare", 00:24:21.024 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:21.024 "is_configured": true, 00:24:21.024 "data_offset": 2048, 00:24:21.024 "data_size": 63488 00:24:21.024 }, 00:24:21.024 { 00:24:21.024 "name": "BaseBdev2", 00:24:21.024 "uuid": 
"c5c38481-da09-5827-8fd7-217a1802d650", 00:24:21.024 "is_configured": true, 00:24:21.024 "data_offset": 2048, 00:24:21.024 "data_size": 63488 00:24:21.024 }, 00:24:21.024 { 00:24:21.024 "name": "BaseBdev3", 00:24:21.024 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:21.024 "is_configured": true, 00:24:21.024 "data_offset": 2048, 00:24:21.024 "data_size": 63488 00:24:21.024 }, 00:24:21.024 { 00:24:21.024 "name": "BaseBdev4", 00:24:21.024 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:21.024 "is_configured": true, 00:24:21.024 "data_offset": 2048, 00:24:21.024 "data_size": 63488 00:24:21.024 } 00:24:21.024 ] 00:24:21.024 }' 00:24:21.024 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.024 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.281 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.282 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.539 13:18:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:21.539 "name": "raid_bdev1", 00:24:21.539 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:21.539 "strip_size_kb": 64, 00:24:21.539 "state": "online", 00:24:21.539 "raid_level": "raid5f", 00:24:21.539 "superblock": true, 00:24:21.539 "num_base_bdevs": 4, 00:24:21.539 "num_base_bdevs_discovered": 4, 00:24:21.539 "num_base_bdevs_operational": 4, 00:24:21.539 "base_bdevs_list": [ 00:24:21.539 { 00:24:21.539 "name": "spare", 00:24:21.539 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:21.539 "is_configured": true, 00:24:21.539 "data_offset": 2048, 00:24:21.539 "data_size": 63488 00:24:21.539 }, 00:24:21.539 { 00:24:21.539 "name": "BaseBdev2", 00:24:21.539 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:21.539 "is_configured": true, 00:24:21.539 "data_offset": 2048, 00:24:21.539 "data_size": 63488 00:24:21.539 }, 00:24:21.539 { 00:24:21.539 "name": "BaseBdev3", 00:24:21.539 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:21.539 "is_configured": true, 00:24:21.539 "data_offset": 2048, 00:24:21.539 "data_size": 63488 00:24:21.539 }, 00:24:21.539 { 00:24:21.539 "name": "BaseBdev4", 00:24:21.539 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:21.539 "is_configured": true, 00:24:21.539 "data_offset": 2048, 00:24:21.539 "data_size": 63488 00:24:21.539 } 00:24:21.539 ] 00:24:21.539 }' 00:24:21.539 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:21.539 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:21.539 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:21.539 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.540 
13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.540 [2024-12-06 13:18:27.972628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.540 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.540 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.540 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.540 "name": "raid_bdev1", 00:24:21.540 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:21.540 "strip_size_kb": 64, 00:24:21.540 "state": "online", 00:24:21.540 "raid_level": "raid5f", 00:24:21.540 "superblock": true, 00:24:21.540 "num_base_bdevs": 4, 00:24:21.540 "num_base_bdevs_discovered": 3, 00:24:21.540 "num_base_bdevs_operational": 3, 00:24:21.540 "base_bdevs_list": [ 00:24:21.540 { 00:24:21.540 "name": null, 00:24:21.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.540 "is_configured": false, 00:24:21.540 "data_offset": 0, 00:24:21.540 "data_size": 63488 00:24:21.540 }, 00:24:21.540 { 00:24:21.540 "name": "BaseBdev2", 00:24:21.540 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:21.540 "is_configured": true, 00:24:21.540 "data_offset": 2048, 00:24:21.540 "data_size": 63488 00:24:21.540 }, 00:24:21.540 { 00:24:21.540 "name": "BaseBdev3", 00:24:21.540 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:21.540 "is_configured": true, 00:24:21.540 "data_offset": 2048, 00:24:21.540 "data_size": 63488 00:24:21.540 }, 00:24:21.540 { 00:24:21.540 "name": "BaseBdev4", 
00:24:21.540 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:21.540 "is_configured": true, 00:24:21.540 "data_offset": 2048, 00:24:21.540 "data_size": 63488 00:24:21.540 } 00:24:21.540 ] 00:24:21.540 }' 00:24:21.540 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.540 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.106 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:22.106 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.106 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.106 [2024-12-06 13:18:28.488786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:22.106 [2024-12-06 13:18:28.489058] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:22.106 [2024-12-06 13:18:28.489085] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:22.106 [2024-12-06 13:18:28.489136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:22.106 [2024-12-06 13:18:28.502341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:24:22.106 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.106 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:22.106 [2024-12-06 13:18:28.511068] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:23.041 "name": "raid_bdev1", 00:24:23.041 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:23.041 "strip_size_kb": 64, 00:24:23.041 "state": "online", 00:24:23.041 
"raid_level": "raid5f", 00:24:23.041 "superblock": true, 00:24:23.041 "num_base_bdevs": 4, 00:24:23.041 "num_base_bdevs_discovered": 4, 00:24:23.041 "num_base_bdevs_operational": 4, 00:24:23.041 "process": { 00:24:23.041 "type": "rebuild", 00:24:23.041 "target": "spare", 00:24:23.041 "progress": { 00:24:23.041 "blocks": 17280, 00:24:23.041 "percent": 9 00:24:23.041 } 00:24:23.041 }, 00:24:23.041 "base_bdevs_list": [ 00:24:23.041 { 00:24:23.041 "name": "spare", 00:24:23.041 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:23.041 "is_configured": true, 00:24:23.041 "data_offset": 2048, 00:24:23.041 "data_size": 63488 00:24:23.041 }, 00:24:23.041 { 00:24:23.041 "name": "BaseBdev2", 00:24:23.041 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:23.041 "is_configured": true, 00:24:23.041 "data_offset": 2048, 00:24:23.041 "data_size": 63488 00:24:23.041 }, 00:24:23.041 { 00:24:23.041 "name": "BaseBdev3", 00:24:23.041 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:23.041 "is_configured": true, 00:24:23.041 "data_offset": 2048, 00:24:23.041 "data_size": 63488 00:24:23.041 }, 00:24:23.041 { 00:24:23.041 "name": "BaseBdev4", 00:24:23.041 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:23.041 "is_configured": true, 00:24:23.041 "data_offset": 2048, 00:24:23.041 "data_size": 63488 00:24:23.041 } 00:24:23.041 ] 00:24:23.041 }' 00:24:23.041 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:23.299 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:23.299 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:23.299 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:23.299 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:23.299 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.299 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.299 [2024-12-06 13:18:29.668809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:23.299 [2024-12-06 13:18:29.723506] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:23.299 [2024-12-06 13:18:29.723596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.299 [2024-12-06 13:18:29.723624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:23.299 [2024-12-06 13:18:29.723638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:23.299 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.300 "name": "raid_bdev1", 00:24:23.300 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:23.300 "strip_size_kb": 64, 00:24:23.300 "state": "online", 00:24:23.300 "raid_level": "raid5f", 00:24:23.300 "superblock": true, 00:24:23.300 "num_base_bdevs": 4, 00:24:23.300 "num_base_bdevs_discovered": 3, 00:24:23.300 "num_base_bdevs_operational": 3, 00:24:23.300 "base_bdevs_list": [ 00:24:23.300 { 00:24:23.300 "name": null, 00:24:23.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.300 "is_configured": false, 00:24:23.300 "data_offset": 0, 00:24:23.300 "data_size": 63488 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "name": "BaseBdev2", 00:24:23.300 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:23.300 "is_configured": true, 00:24:23.300 "data_offset": 2048, 00:24:23.300 "data_size": 63488 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "name": "BaseBdev3", 00:24:23.300 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:23.300 "is_configured": true, 00:24:23.300 "data_offset": 2048, 00:24:23.300 "data_size": 63488 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "name": "BaseBdev4", 00:24:23.300 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:23.300 "is_configured": true, 00:24:23.300 "data_offset": 2048, 00:24:23.300 "data_size": 63488 00:24:23.300 } 00:24:23.300 ] 00:24:23.300 }' 
00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.300 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.867 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:23.867 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.867 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.867 [2024-12-06 13:18:30.274568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:23.867 [2024-12-06 13:18:30.274672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.867 [2024-12-06 13:18:30.274710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:23.867 [2024-12-06 13:18:30.274731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.867 [2024-12-06 13:18:30.275357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.867 [2024-12-06 13:18:30.275390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:23.867 [2024-12-06 13:18:30.275541] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:23.867 [2024-12-06 13:18:30.275568] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:23.867 [2024-12-06 13:18:30.275586] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:23.867 [2024-12-06 13:18:30.275621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.867 [2024-12-06 13:18:30.288699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:24:23.867 spare 00:24:23.867 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.867 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:23.867 [2024-12-06 13:18:30.297216] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.803 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.062 "name": "raid_bdev1", 00:24:25.062 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:25.062 "strip_size_kb": 64, 00:24:25.062 "state": 
"online", 00:24:25.062 "raid_level": "raid5f", 00:24:25.062 "superblock": true, 00:24:25.062 "num_base_bdevs": 4, 00:24:25.062 "num_base_bdevs_discovered": 4, 00:24:25.062 "num_base_bdevs_operational": 4, 00:24:25.062 "process": { 00:24:25.062 "type": "rebuild", 00:24:25.062 "target": "spare", 00:24:25.062 "progress": { 00:24:25.062 "blocks": 17280, 00:24:25.062 "percent": 9 00:24:25.062 } 00:24:25.062 }, 00:24:25.062 "base_bdevs_list": [ 00:24:25.062 { 00:24:25.062 "name": "spare", 00:24:25.062 "uuid": "b65d1082-4c6c-586a-8a5c-2c79bc8a5b21", 00:24:25.062 "is_configured": true, 00:24:25.062 "data_offset": 2048, 00:24:25.062 "data_size": 63488 00:24:25.062 }, 00:24:25.062 { 00:24:25.062 "name": "BaseBdev2", 00:24:25.062 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:25.062 "is_configured": true, 00:24:25.062 "data_offset": 2048, 00:24:25.062 "data_size": 63488 00:24:25.062 }, 00:24:25.062 { 00:24:25.062 "name": "BaseBdev3", 00:24:25.062 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:25.062 "is_configured": true, 00:24:25.062 "data_offset": 2048, 00:24:25.062 "data_size": 63488 00:24:25.062 }, 00:24:25.062 { 00:24:25.062 "name": "BaseBdev4", 00:24:25.062 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:25.062 "is_configured": true, 00:24:25.062 "data_offset": 2048, 00:24:25.062 "data_size": 63488 00:24:25.062 } 00:24:25.062 ] 00:24:25.062 }' 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:25.062 13:18:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.062 [2024-12-06 13:18:31.458660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.062 [2024-12-06 13:18:31.510642] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:25.062 [2024-12-06 13:18:31.510755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.062 [2024-12-06 13:18:31.510788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.062 [2024-12-06 13:18:31.510807] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.062 13:18:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.062 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.327 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.327 "name": "raid_bdev1", 00:24:25.327 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:25.327 "strip_size_kb": 64, 00:24:25.327 "state": "online", 00:24:25.327 "raid_level": "raid5f", 00:24:25.327 "superblock": true, 00:24:25.327 "num_base_bdevs": 4, 00:24:25.327 "num_base_bdevs_discovered": 3, 00:24:25.327 "num_base_bdevs_operational": 3, 00:24:25.327 "base_bdevs_list": [ 00:24:25.327 { 00:24:25.327 "name": null, 00:24:25.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.327 "is_configured": false, 00:24:25.327 "data_offset": 0, 00:24:25.327 "data_size": 63488 00:24:25.327 }, 00:24:25.327 { 00:24:25.327 "name": "BaseBdev2", 00:24:25.327 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:25.327 "is_configured": true, 00:24:25.327 "data_offset": 2048, 00:24:25.327 "data_size": 63488 00:24:25.327 }, 00:24:25.327 { 00:24:25.327 "name": "BaseBdev3", 00:24:25.327 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:25.327 "is_configured": true, 00:24:25.327 "data_offset": 2048, 00:24:25.327 "data_size": 63488 00:24:25.327 }, 00:24:25.327 { 00:24:25.327 "name": "BaseBdev4", 00:24:25.327 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:25.327 "is_configured": true, 00:24:25.327 "data_offset": 2048, 00:24:25.327 
"data_size": 63488 00:24:25.327 } 00:24:25.327 ] 00:24:25.327 }' 00:24:25.327 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.327 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.586 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.845 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.845 "name": "raid_bdev1", 00:24:25.845 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:25.845 "strip_size_kb": 64, 00:24:25.845 "state": "online", 00:24:25.845 "raid_level": "raid5f", 00:24:25.845 "superblock": true, 00:24:25.845 "num_base_bdevs": 4, 00:24:25.845 "num_base_bdevs_discovered": 3, 00:24:25.845 "num_base_bdevs_operational": 3, 00:24:25.845 "base_bdevs_list": [ 00:24:25.845 { 00:24:25.845 "name": null, 00:24:25.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.845 
"is_configured": false, 00:24:25.845 "data_offset": 0, 00:24:25.845 "data_size": 63488 00:24:25.845 }, 00:24:25.845 { 00:24:25.845 "name": "BaseBdev2", 00:24:25.845 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:25.845 "is_configured": true, 00:24:25.845 "data_offset": 2048, 00:24:25.845 "data_size": 63488 00:24:25.845 }, 00:24:25.845 { 00:24:25.845 "name": "BaseBdev3", 00:24:25.846 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:25.846 "is_configured": true, 00:24:25.846 "data_offset": 2048, 00:24:25.846 "data_size": 63488 00:24:25.846 }, 00:24:25.846 { 00:24:25.846 "name": "BaseBdev4", 00:24:25.846 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:25.846 "is_configured": true, 00:24:25.846 "data_offset": 2048, 00:24:25.846 "data_size": 63488 00:24:25.846 } 00:24:25.846 ] 00:24:25.846 }' 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.846 13:18:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.846 [2024-12-06 13:18:32.238418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:25.846 [2024-12-06 13:18:32.238498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.846 [2024-12-06 13:18:32.238532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:25.846 [2024-12-06 13:18:32.238548] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.846 [2024-12-06 13:18:32.239169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.846 [2024-12-06 13:18:32.239205] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:25.846 [2024-12-06 13:18:32.239320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:25.846 [2024-12-06 13:18:32.239343] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:25.846 [2024-12-06 13:18:32.239360] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:25.846 [2024-12-06 13:18:32.239374] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:25.846 BaseBdev1 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.846 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.783 "name": "raid_bdev1", 00:24:26.783 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:26.783 "strip_size_kb": 64, 00:24:26.783 "state": "online", 00:24:26.783 "raid_level": "raid5f", 00:24:26.783 "superblock": true, 00:24:26.783 "num_base_bdevs": 4, 00:24:26.783 "num_base_bdevs_discovered": 3, 00:24:26.783 "num_base_bdevs_operational": 3, 00:24:26.783 "base_bdevs_list": [ 00:24:26.783 { 00:24:26.783 "name": null, 00:24:26.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.783 "is_configured": false, 00:24:26.783 
"data_offset": 0, 00:24:26.783 "data_size": 63488 00:24:26.783 }, 00:24:26.783 { 00:24:26.783 "name": "BaseBdev2", 00:24:26.783 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:26.783 "is_configured": true, 00:24:26.783 "data_offset": 2048, 00:24:26.783 "data_size": 63488 00:24:26.783 }, 00:24:26.783 { 00:24:26.783 "name": "BaseBdev3", 00:24:26.783 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:26.783 "is_configured": true, 00:24:26.783 "data_offset": 2048, 00:24:26.783 "data_size": 63488 00:24:26.783 }, 00:24:26.783 { 00:24:26.783 "name": "BaseBdev4", 00:24:26.783 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:26.783 "is_configured": true, 00:24:26.783 "data_offset": 2048, 00:24:26.783 "data_size": 63488 00:24:26.783 } 00:24:26.783 ] 00:24:26.783 }' 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.783 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:27.352 "name": "raid_bdev1", 00:24:27.352 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:27.352 "strip_size_kb": 64, 00:24:27.352 "state": "online", 00:24:27.352 "raid_level": "raid5f", 00:24:27.352 "superblock": true, 00:24:27.352 "num_base_bdevs": 4, 00:24:27.352 "num_base_bdevs_discovered": 3, 00:24:27.352 "num_base_bdevs_operational": 3, 00:24:27.352 "base_bdevs_list": [ 00:24:27.352 { 00:24:27.352 "name": null, 00:24:27.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.352 "is_configured": false, 00:24:27.352 "data_offset": 0, 00:24:27.352 "data_size": 63488 00:24:27.352 }, 00:24:27.352 { 00:24:27.352 "name": "BaseBdev2", 00:24:27.352 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:27.352 "is_configured": true, 00:24:27.352 "data_offset": 2048, 00:24:27.352 "data_size": 63488 00:24:27.352 }, 00:24:27.352 { 00:24:27.352 "name": "BaseBdev3", 00:24:27.352 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:27.352 "is_configured": true, 00:24:27.352 "data_offset": 2048, 00:24:27.352 "data_size": 63488 00:24:27.352 }, 00:24:27.352 { 00:24:27.352 "name": "BaseBdev4", 00:24:27.352 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:27.352 "is_configured": true, 00:24:27.352 "data_offset": 2048, 00:24:27.352 "data_size": 63488 00:24:27.352 } 00:24:27.352 ] 00:24:27.352 }' 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:27.352 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:27.611 
13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.611 [2024-12-06 13:18:33.935051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.611 [2024-12-06 13:18:33.935304] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:27.611 [2024-12-06 13:18:33.935339] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:27.611 request: 00:24:27.611 { 00:24:27.611 "base_bdev": "BaseBdev1", 00:24:27.611 "raid_bdev": "raid_bdev1", 00:24:27.611 "method": "bdev_raid_add_base_bdev", 00:24:27.611 "req_id": 1 00:24:27.611 } 00:24:27.611 Got JSON-RPC error response 00:24:27.611 response: 00:24:27.611 { 00:24:27.611 "code": -22, 00:24:27.611 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:24:27.611 } 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.611 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:28.548 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:28.548 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:28.548 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:28.548 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:24:28.548 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:28.548 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:28.548 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.549 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.549 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.549 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:28.549 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.549 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.549 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.549 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.549 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.549 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.549 "name": "raid_bdev1", 00:24:28.549 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:28.549 "strip_size_kb": 64, 00:24:28.549 "state": "online", 00:24:28.549 "raid_level": "raid5f", 00:24:28.549 "superblock": true, 00:24:28.549 "num_base_bdevs": 4, 00:24:28.549 "num_base_bdevs_discovered": 3, 00:24:28.549 "num_base_bdevs_operational": 3, 00:24:28.549 "base_bdevs_list": [ 00:24:28.549 { 00:24:28.549 "name": null, 00:24:28.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.549 "is_configured": false, 00:24:28.549 "data_offset": 0, 00:24:28.549 "data_size": 63488 00:24:28.549 }, 00:24:28.549 { 00:24:28.549 "name": "BaseBdev2", 00:24:28.549 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:28.549 "is_configured": true, 00:24:28.549 "data_offset": 2048, 00:24:28.549 "data_size": 63488 00:24:28.549 }, 00:24:28.549 { 00:24:28.549 "name": "BaseBdev3", 00:24:28.549 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:28.549 "is_configured": true, 00:24:28.549 "data_offset": 2048, 00:24:28.549 "data_size": 63488 00:24:28.549 }, 00:24:28.549 { 00:24:28.549 "name": "BaseBdev4", 00:24:28.549 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:28.549 "is_configured": true, 00:24:28.549 "data_offset": 2048, 00:24:28.549 "data_size": 63488 00:24:28.549 } 00:24:28.549 ] 00:24:28.549 }' 00:24:28.549 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.549 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:29.115 "name": "raid_bdev1", 00:24:29.115 "uuid": "86e29525-7583-4926-a6b9-4b8bb32206ec", 00:24:29.115 "strip_size_kb": 64, 00:24:29.115 "state": "online", 00:24:29.115 "raid_level": "raid5f", 00:24:29.115 "superblock": true, 00:24:29.115 "num_base_bdevs": 4, 00:24:29.115 "num_base_bdevs_discovered": 3, 00:24:29.115 "num_base_bdevs_operational": 3, 00:24:29.115 "base_bdevs_list": [ 00:24:29.115 { 00:24:29.115 "name": null, 00:24:29.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.115 "is_configured": false, 00:24:29.115 "data_offset": 0, 00:24:29.115 "data_size": 63488 00:24:29.115 }, 00:24:29.115 { 00:24:29.115 "name": "BaseBdev2", 00:24:29.115 "uuid": "c5c38481-da09-5827-8fd7-217a1802d650", 00:24:29.115 "is_configured": true, 
00:24:29.115 "data_offset": 2048, 00:24:29.115 "data_size": 63488 00:24:29.115 }, 00:24:29.115 { 00:24:29.115 "name": "BaseBdev3", 00:24:29.115 "uuid": "8c868d3b-2c51-5439-8e78-40f56cfce90d", 00:24:29.115 "is_configured": true, 00:24:29.115 "data_offset": 2048, 00:24:29.115 "data_size": 63488 00:24:29.115 }, 00:24:29.115 { 00:24:29.115 "name": "BaseBdev4", 00:24:29.115 "uuid": "975b9249-69b2-55e4-9c88-94b0def1824c", 00:24:29.115 "is_configured": true, 00:24:29.115 "data_offset": 2048, 00:24:29.115 "data_size": 63488 00:24:29.115 } 00:24:29.115 ] 00:24:29.115 }' 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85861 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85861 ']' 00:24:29.115 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85861 00:24:29.373 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:24:29.373 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.373 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85861 00:24:29.373 killing process with pid 85861 00:24:29.373 Received shutdown signal, test time was about 60.000000 seconds 00:24:29.373 00:24:29.373 Latency(us) 00:24:29.373 [2024-12-06T13:18:35.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.373 [2024-12-06T13:18:35.902Z] 
=================================================================================================================== 00:24:29.373 [2024-12-06T13:18:35.902Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:29.373 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.373 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.373 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85861' 00:24:29.373 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85861 00:24:29.373 [2024-12-06 13:18:35.671957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:29.373 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85861 00:24:29.373 [2024-12-06 13:18:35.672115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:29.373 [2024-12-06 13:18:35.672242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:29.373 [2024-12-06 13:18:35.672266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:29.632 [2024-12-06 13:18:36.111450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:30.677 13:18:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:30.677 00:24:30.677 real 0m28.560s 00:24:30.677 user 0m37.108s 00:24:30.677 sys 0m2.880s 00:24:30.677 13:18:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.677 ************************************ 00:24:30.677 END TEST raid5f_rebuild_test_sb 00:24:30.677 ************************************ 00:24:30.677 13:18:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.936 13:18:37 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:24:30.936 13:18:37 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:24:30.936 13:18:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:30.936 13:18:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.936 13:18:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:30.936 ************************************ 00:24:30.936 START TEST raid_state_function_test_sb_4k 00:24:30.936 ************************************ 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:30.936 13:18:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86681 00:24:30.936 Process raid pid: 86681 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86681' 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86681 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86681 ']' 00:24:30.936 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.936 13:18:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:30.936 [2024-12-06 13:18:37.343204] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:24:30.936 [2024-12-06 13:18:37.343369] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.195 [2024-12-06 13:18:37.518185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.195 [2024-12-06 13:18:37.653098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.453 [2024-12-06 13:18:37.864529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:31.453 [2024-12-06 13:18:37.864589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:32.022 13:18:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.022 [2024-12-06 13:18:38.389966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:32.022 [2024-12-06 13:18:38.390306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:32.022 [2024-12-06 13:18:38.390348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:32.022 [2024-12-06 13:18:38.390378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.022 13:18:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.022 "name": "Existed_Raid", 00:24:32.022 "uuid": "d0d7180d-609e-44b9-9d7a-f82854736c90", 00:24:32.022 "strip_size_kb": 0, 00:24:32.022 "state": "configuring", 00:24:32.022 "raid_level": "raid1", 00:24:32.022 "superblock": true, 00:24:32.022 "num_base_bdevs": 2, 00:24:32.022 "num_base_bdevs_discovered": 0, 00:24:32.022 "num_base_bdevs_operational": 2, 00:24:32.022 "base_bdevs_list": [ 00:24:32.022 { 00:24:32.022 "name": "BaseBdev1", 00:24:32.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.022 "is_configured": false, 00:24:32.022 "data_offset": 0, 00:24:32.022 "data_size": 0 00:24:32.022 }, 00:24:32.022 { 00:24:32.022 "name": "BaseBdev2", 00:24:32.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.022 "is_configured": false, 00:24:32.022 "data_offset": 0, 00:24:32.022 "data_size": 0 00:24:32.022 } 00:24:32.022 ] 00:24:32.022 }' 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.022 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.591 [2024-12-06 13:18:38.970070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:32.591 [2024-12-06 13:18:38.970125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.591 [2024-12-06 13:18:38.982036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:32.591 [2024-12-06 13:18:38.982245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:32.591 [2024-12-06 13:18:38.982421] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:32.591 [2024-12-06 13:18:38.982537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.591 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.591 [2024-12-06 13:18:39.032301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:32.591 BaseBdev1 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.591 [ 00:24:32.591 { 00:24:32.591 "name": "BaseBdev1", 00:24:32.591 "aliases": [ 00:24:32.591 "75e63ce0-d059-4fd6-b079-94e3c084aa3d" 00:24:32.591 
], 00:24:32.591 "product_name": "Malloc disk", 00:24:32.591 "block_size": 4096, 00:24:32.591 "num_blocks": 8192, 00:24:32.591 "uuid": "75e63ce0-d059-4fd6-b079-94e3c084aa3d", 00:24:32.591 "assigned_rate_limits": { 00:24:32.591 "rw_ios_per_sec": 0, 00:24:32.591 "rw_mbytes_per_sec": 0, 00:24:32.591 "r_mbytes_per_sec": 0, 00:24:32.591 "w_mbytes_per_sec": 0 00:24:32.591 }, 00:24:32.591 "claimed": true, 00:24:32.591 "claim_type": "exclusive_write", 00:24:32.591 "zoned": false, 00:24:32.591 "supported_io_types": { 00:24:32.591 "read": true, 00:24:32.591 "write": true, 00:24:32.591 "unmap": true, 00:24:32.591 "flush": true, 00:24:32.591 "reset": true, 00:24:32.591 "nvme_admin": false, 00:24:32.591 "nvme_io": false, 00:24:32.591 "nvme_io_md": false, 00:24:32.591 "write_zeroes": true, 00:24:32.591 "zcopy": true, 00:24:32.591 "get_zone_info": false, 00:24:32.591 "zone_management": false, 00:24:32.591 "zone_append": false, 00:24:32.591 "compare": false, 00:24:32.591 "compare_and_write": false, 00:24:32.591 "abort": true, 00:24:32.591 "seek_hole": false, 00:24:32.591 "seek_data": false, 00:24:32.591 "copy": true, 00:24:32.591 "nvme_iov_md": false 00:24:32.591 }, 00:24:32.591 "memory_domains": [ 00:24:32.591 { 00:24:32.591 "dma_device_id": "system", 00:24:32.591 "dma_device_type": 1 00:24:32.591 }, 00:24:32.591 { 00:24:32.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.591 "dma_device_type": 2 00:24:32.591 } 00:24:32.591 ], 00:24:32.591 "driver_specific": {} 00:24:32.591 } 00:24:32.591 ] 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.591 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.850 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.850 "name": "Existed_Raid", 00:24:32.850 "uuid": "ef029ec9-0f58-450e-ac24-eb669c2e3c6d", 00:24:32.850 "strip_size_kb": 0, 00:24:32.850 "state": "configuring", 00:24:32.850 "raid_level": "raid1", 00:24:32.850 "superblock": true, 00:24:32.850 "num_base_bdevs": 2, 00:24:32.850 "num_base_bdevs_discovered": 1, 
00:24:32.850 "num_base_bdevs_operational": 2, 00:24:32.850 "base_bdevs_list": [ 00:24:32.850 { 00:24:32.850 "name": "BaseBdev1", 00:24:32.850 "uuid": "75e63ce0-d059-4fd6-b079-94e3c084aa3d", 00:24:32.850 "is_configured": true, 00:24:32.850 "data_offset": 256, 00:24:32.850 "data_size": 7936 00:24:32.850 }, 00:24:32.850 { 00:24:32.850 "name": "BaseBdev2", 00:24:32.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.850 "is_configured": false, 00:24:32.850 "data_offset": 0, 00:24:32.850 "data_size": 0 00:24:32.850 } 00:24:32.850 ] 00:24:32.850 }' 00:24:32.850 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.850 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.108 [2024-12-06 13:18:39.588548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:33.108 [2024-12-06 13:18:39.588627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.108 [2024-12-06 13:18:39.596585] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:33.108 [2024-12-06 13:18:39.599340] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:33.108 [2024-12-06 13:18:39.599399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.108 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.367 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.367 "name": "Existed_Raid", 00:24:33.367 "uuid": "35ead7d6-8885-4999-bffb-99e705a1863b", 00:24:33.367 "strip_size_kb": 0, 00:24:33.367 "state": "configuring", 00:24:33.367 "raid_level": "raid1", 00:24:33.367 "superblock": true, 00:24:33.367 "num_base_bdevs": 2, 00:24:33.367 "num_base_bdevs_discovered": 1, 00:24:33.367 "num_base_bdevs_operational": 2, 00:24:33.367 "base_bdevs_list": [ 00:24:33.367 { 00:24:33.367 "name": "BaseBdev1", 00:24:33.367 "uuid": "75e63ce0-d059-4fd6-b079-94e3c084aa3d", 00:24:33.367 "is_configured": true, 00:24:33.367 "data_offset": 256, 00:24:33.367 "data_size": 7936 00:24:33.367 }, 00:24:33.367 { 00:24:33.367 "name": "BaseBdev2", 00:24:33.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.367 "is_configured": false, 00:24:33.367 "data_offset": 0, 00:24:33.367 "data_size": 0 00:24:33.367 } 00:24:33.367 ] 00:24:33.367 }' 00:24:33.367 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.367 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.626 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:24:33.626 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.626 13:18:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.886 [2024-12-06 13:18:40.163486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:33.886 [2024-12-06 13:18:40.164264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:33.886 [2024-12-06 13:18:40.164294] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:33.886 BaseBdev2 00:24:33.886 [2024-12-06 13:18:40.164712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:33.886 [2024-12-06 13:18:40.165004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:33.886 [2024-12-06 13:18:40.165034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:33.886 [2024-12-06 13:18:40.165246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:33.886 13:18:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.886 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.886 [ 00:24:33.886 { 00:24:33.886 "name": "BaseBdev2", 00:24:33.886 "aliases": [ 00:24:33.886 "c802d90e-f94c-47b3-bace-e18ad084b7dd" 00:24:33.886 ], 00:24:33.886 "product_name": "Malloc disk", 00:24:33.886 "block_size": 4096, 00:24:33.886 "num_blocks": 8192, 00:24:33.886 "uuid": "c802d90e-f94c-47b3-bace-e18ad084b7dd", 00:24:33.886 "assigned_rate_limits": { 00:24:33.886 "rw_ios_per_sec": 0, 00:24:33.886 "rw_mbytes_per_sec": 0, 00:24:33.886 "r_mbytes_per_sec": 0, 00:24:33.886 "w_mbytes_per_sec": 0 00:24:33.886 }, 00:24:33.886 "claimed": true, 00:24:33.886 "claim_type": "exclusive_write", 00:24:33.886 "zoned": false, 00:24:33.886 "supported_io_types": { 00:24:33.887 "read": true, 00:24:33.887 "write": true, 00:24:33.887 "unmap": true, 00:24:33.887 "flush": true, 00:24:33.887 "reset": true, 00:24:33.887 "nvme_admin": false, 00:24:33.887 "nvme_io": false, 00:24:33.887 "nvme_io_md": false, 00:24:33.887 "write_zeroes": true, 00:24:33.887 "zcopy": true, 00:24:33.887 "get_zone_info": false, 00:24:33.887 "zone_management": false, 00:24:33.887 "zone_append": false, 00:24:33.887 "compare": false, 00:24:33.887 "compare_and_write": false, 00:24:33.887 "abort": true, 00:24:33.887 "seek_hole": false, 00:24:33.887 "seek_data": false, 00:24:33.887 "copy": true, 00:24:33.887 "nvme_iov_md": false 
00:24:33.887 }, 00:24:33.887 "memory_domains": [ 00:24:33.887 { 00:24:33.887 "dma_device_id": "system", 00:24:33.887 "dma_device_type": 1 00:24:33.887 }, 00:24:33.887 { 00:24:33.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.887 "dma_device_type": 2 00:24:33.887 } 00:24:33.887 ], 00:24:33.887 "driver_specific": {} 00:24:33.887 } 00:24:33.887 ] 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.887 "name": "Existed_Raid", 00:24:33.887 "uuid": "35ead7d6-8885-4999-bffb-99e705a1863b", 00:24:33.887 "strip_size_kb": 0, 00:24:33.887 "state": "online", 00:24:33.887 "raid_level": "raid1", 00:24:33.887 "superblock": true, 00:24:33.887 "num_base_bdevs": 2, 00:24:33.887 "num_base_bdevs_discovered": 2, 00:24:33.887 "num_base_bdevs_operational": 2, 00:24:33.887 "base_bdevs_list": [ 00:24:33.887 { 00:24:33.887 "name": "BaseBdev1", 00:24:33.887 "uuid": "75e63ce0-d059-4fd6-b079-94e3c084aa3d", 00:24:33.887 "is_configured": true, 00:24:33.887 "data_offset": 256, 00:24:33.887 "data_size": 7936 00:24:33.887 }, 00:24:33.887 { 00:24:33.887 "name": "BaseBdev2", 00:24:33.887 "uuid": "c802d90e-f94c-47b3-bace-e18ad084b7dd", 00:24:33.887 "is_configured": true, 00:24:33.887 "data_offset": 256, 00:24:33.887 "data_size": 7936 00:24:33.887 } 00:24:33.887 ] 00:24:33.887 }' 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.887 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:34.455 13:18:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.455 [2024-12-06 13:18:40.740175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.455 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:34.455 "name": "Existed_Raid", 00:24:34.455 "aliases": [ 00:24:34.455 "35ead7d6-8885-4999-bffb-99e705a1863b" 00:24:34.455 ], 00:24:34.455 "product_name": "Raid Volume", 00:24:34.455 "block_size": 4096, 00:24:34.455 "num_blocks": 7936, 00:24:34.455 "uuid": "35ead7d6-8885-4999-bffb-99e705a1863b", 00:24:34.455 "assigned_rate_limits": { 00:24:34.455 "rw_ios_per_sec": 0, 00:24:34.455 "rw_mbytes_per_sec": 0, 00:24:34.455 "r_mbytes_per_sec": 0, 00:24:34.455 "w_mbytes_per_sec": 0 00:24:34.455 }, 00:24:34.455 "claimed": false, 00:24:34.455 "zoned": false, 00:24:34.455 "supported_io_types": { 00:24:34.455 "read": true, 
00:24:34.455 "write": true, 00:24:34.455 "unmap": false, 00:24:34.455 "flush": false, 00:24:34.455 "reset": true, 00:24:34.455 "nvme_admin": false, 00:24:34.455 "nvme_io": false, 00:24:34.455 "nvme_io_md": false, 00:24:34.455 "write_zeroes": true, 00:24:34.455 "zcopy": false, 00:24:34.455 "get_zone_info": false, 00:24:34.455 "zone_management": false, 00:24:34.455 "zone_append": false, 00:24:34.455 "compare": false, 00:24:34.455 "compare_and_write": false, 00:24:34.455 "abort": false, 00:24:34.455 "seek_hole": false, 00:24:34.455 "seek_data": false, 00:24:34.455 "copy": false, 00:24:34.455 "nvme_iov_md": false 00:24:34.455 }, 00:24:34.455 "memory_domains": [ 00:24:34.455 { 00:24:34.455 "dma_device_id": "system", 00:24:34.455 "dma_device_type": 1 00:24:34.455 }, 00:24:34.455 { 00:24:34.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.455 "dma_device_type": 2 00:24:34.455 }, 00:24:34.455 { 00:24:34.455 "dma_device_id": "system", 00:24:34.455 "dma_device_type": 1 00:24:34.455 }, 00:24:34.455 { 00:24:34.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.455 "dma_device_type": 2 00:24:34.455 } 00:24:34.455 ], 00:24:34.455 "driver_specific": { 00:24:34.455 "raid": { 00:24:34.455 "uuid": "35ead7d6-8885-4999-bffb-99e705a1863b", 00:24:34.455 "strip_size_kb": 0, 00:24:34.455 "state": "online", 00:24:34.455 "raid_level": "raid1", 00:24:34.455 "superblock": true, 00:24:34.455 "num_base_bdevs": 2, 00:24:34.455 "num_base_bdevs_discovered": 2, 00:24:34.455 "num_base_bdevs_operational": 2, 00:24:34.455 "base_bdevs_list": [ 00:24:34.455 { 00:24:34.455 "name": "BaseBdev1", 00:24:34.455 "uuid": "75e63ce0-d059-4fd6-b079-94e3c084aa3d", 00:24:34.455 "is_configured": true, 00:24:34.455 "data_offset": 256, 00:24:34.455 "data_size": 7936 00:24:34.455 }, 00:24:34.455 { 00:24:34.456 "name": "BaseBdev2", 00:24:34.456 "uuid": "c802d90e-f94c-47b3-bace-e18ad084b7dd", 00:24:34.456 "is_configured": true, 00:24:34.456 "data_offset": 256, 00:24:34.456 "data_size": 7936 00:24:34.456 } 
00:24:34.456 ] 00:24:34.456 } 00:24:34.456 } 00:24:34.456 }' 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:34.456 BaseBdev2' 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:34.456 13:18:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.456 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.456 [2024-12-06 13:18:40.972004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:34.714 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.714 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:34.714 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:34.714 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:34.715 13:18:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.715 "name": "Existed_Raid", 00:24:34.715 "uuid": "35ead7d6-8885-4999-bffb-99e705a1863b", 00:24:34.715 "strip_size_kb": 0, 00:24:34.715 "state": "online", 00:24:34.715 "raid_level": "raid1", 00:24:34.715 "superblock": true, 00:24:34.715 
"num_base_bdevs": 2, 00:24:34.715 "num_base_bdevs_discovered": 1, 00:24:34.715 "num_base_bdevs_operational": 1, 00:24:34.715 "base_bdevs_list": [ 00:24:34.715 { 00:24:34.715 "name": null, 00:24:34.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.715 "is_configured": false, 00:24:34.715 "data_offset": 0, 00:24:34.715 "data_size": 7936 00:24:34.715 }, 00:24:34.715 { 00:24:34.715 "name": "BaseBdev2", 00:24:34.715 "uuid": "c802d90e-f94c-47b3-bace-e18ad084b7dd", 00:24:34.715 "is_configured": true, 00:24:34.715 "data_offset": 256, 00:24:34.715 "data_size": 7936 00:24:34.715 } 00:24:34.715 ] 00:24:34.715 }' 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.715 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.283 [2024-12-06 13:18:41.613530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:35.283 [2024-12-06 13:18:41.613696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:35.283 [2024-12-06 13:18:41.706258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:35.283 [2024-12-06 13:18:41.706637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:35.283 [2024-12-06 13:18:41.706827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:35.283 13:18:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86681 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86681 ']' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86681 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86681 00:24:35.283 killing process with pid 86681 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86681' 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86681 00:24:35.283 [2024-12-06 13:18:41.797698] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:35.283 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86681 00:24:35.542 [2024-12-06 13:18:41.813888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:36.479 ************************************ 00:24:36.479 END TEST raid_state_function_test_sb_4k 00:24:36.479 ************************************ 00:24:36.479 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:24:36.479 00:24:36.479 real 0m5.742s 00:24:36.479 user 0m8.591s 00:24:36.479 sys 0m0.834s 00:24:36.479 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.479 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.737 13:18:43 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:24:36.737 13:18:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:36.737 13:18:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.737 13:18:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:36.737 ************************************ 00:24:36.737 START TEST raid_superblock_test_4k 00:24:36.737 ************************************ 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:36.737 
13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:36.737 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86939 00:24:36.738 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86939 00:24:36.738 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:36.738 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86939 ']' 00:24:36.738 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.738 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.738 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.738 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.738 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:36.738 [2024-12-06 13:18:43.160752] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:24:36.738 [2024-12-06 13:18:43.161169] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86939 ] 00:24:36.994 [2024-12-06 13:18:43.342966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.994 [2024-12-06 13:18:43.493684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.251 [2024-12-06 13:18:43.717422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.251 [2024-12-06 13:18:43.717534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.817 malloc1 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.817 [2024-12-06 13:18:44.206749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:37.817 [2024-12-06 13:18:44.207050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.817 [2024-12-06 13:18:44.207140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:37.817 [2024-12-06 13:18:44.207441] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.817 [2024-12-06 13:18:44.211015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.817 [2024-12-06 13:18:44.211281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:37.817 pt1 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.817 malloc2 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.817 [2024-12-06 13:18:44.268220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:37.817 [2024-12-06 13:18:44.268339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.817 [2024-12-06 13:18:44.268385] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:37.817 [2024-12-06 13:18:44.268405] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.817 [2024-12-06 13:18:44.271803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.817 [2024-12-06 
13:18:44.271871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:37.817 pt2 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.817 [2024-12-06 13:18:44.280357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:37.817 [2024-12-06 13:18:44.283537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:37.817 [2024-12-06 13:18:44.284059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:37.817 [2024-12-06 13:18:44.284093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:37.817 [2024-12-06 13:18:44.284580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:37.817 [2024-12-06 13:18:44.284875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:37.817 [2024-12-06 13:18:44.284906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:37.817 [2024-12-06 13:18:44.285192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.817 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:37.818 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.075 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.075 "name": "raid_bdev1", 00:24:38.075 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 00:24:38.075 "strip_size_kb": 0, 00:24:38.075 "state": "online", 00:24:38.075 "raid_level": "raid1", 00:24:38.075 "superblock": true, 00:24:38.075 "num_base_bdevs": 2, 00:24:38.075 
"num_base_bdevs_discovered": 2, 00:24:38.075 "num_base_bdevs_operational": 2, 00:24:38.075 "base_bdevs_list": [ 00:24:38.075 { 00:24:38.075 "name": "pt1", 00:24:38.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.075 "is_configured": true, 00:24:38.075 "data_offset": 256, 00:24:38.075 "data_size": 7936 00:24:38.075 }, 00:24:38.075 { 00:24:38.075 "name": "pt2", 00:24:38.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:38.075 "is_configured": true, 00:24:38.075 "data_offset": 256, 00:24:38.075 "data_size": 7936 00:24:38.075 } 00:24:38.075 ] 00:24:38.075 }' 00:24:38.075 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.075 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.333 [2024-12-06 13:18:44.825795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:38.333 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:38.591 "name": "raid_bdev1", 00:24:38.591 "aliases": [ 00:24:38.591 "32a4b585-f198-4c60-8618-6d0c3d192439" 00:24:38.591 ], 00:24:38.591 "product_name": "Raid Volume", 00:24:38.591 "block_size": 4096, 00:24:38.591 "num_blocks": 7936, 00:24:38.591 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 00:24:38.591 "assigned_rate_limits": { 00:24:38.591 "rw_ios_per_sec": 0, 00:24:38.591 "rw_mbytes_per_sec": 0, 00:24:38.591 "r_mbytes_per_sec": 0, 00:24:38.591 "w_mbytes_per_sec": 0 00:24:38.591 }, 00:24:38.591 "claimed": false, 00:24:38.591 "zoned": false, 00:24:38.591 "supported_io_types": { 00:24:38.591 "read": true, 00:24:38.591 "write": true, 00:24:38.591 "unmap": false, 00:24:38.591 "flush": false, 00:24:38.591 "reset": true, 00:24:38.591 "nvme_admin": false, 00:24:38.591 "nvme_io": false, 00:24:38.591 "nvme_io_md": false, 00:24:38.591 "write_zeroes": true, 00:24:38.591 "zcopy": false, 00:24:38.591 "get_zone_info": false, 00:24:38.591 "zone_management": false, 00:24:38.591 "zone_append": false, 00:24:38.591 "compare": false, 00:24:38.591 "compare_and_write": false, 00:24:38.591 "abort": false, 00:24:38.591 "seek_hole": false, 00:24:38.591 "seek_data": false, 00:24:38.591 "copy": false, 00:24:38.591 "nvme_iov_md": false 00:24:38.591 }, 00:24:38.591 "memory_domains": [ 00:24:38.591 { 00:24:38.591 "dma_device_id": "system", 00:24:38.591 "dma_device_type": 1 00:24:38.591 }, 00:24:38.591 { 00:24:38.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.591 "dma_device_type": 2 00:24:38.591 }, 00:24:38.591 { 00:24:38.591 "dma_device_id": "system", 00:24:38.591 "dma_device_type": 1 00:24:38.591 }, 00:24:38.591 { 00:24:38.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.591 "dma_device_type": 2 00:24:38.591 } 00:24:38.591 ], 
00:24:38.591 "driver_specific": { 00:24:38.591 "raid": { 00:24:38.591 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 00:24:38.591 "strip_size_kb": 0, 00:24:38.591 "state": "online", 00:24:38.591 "raid_level": "raid1", 00:24:38.591 "superblock": true, 00:24:38.591 "num_base_bdevs": 2, 00:24:38.591 "num_base_bdevs_discovered": 2, 00:24:38.591 "num_base_bdevs_operational": 2, 00:24:38.591 "base_bdevs_list": [ 00:24:38.591 { 00:24:38.591 "name": "pt1", 00:24:38.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.591 "is_configured": true, 00:24:38.591 "data_offset": 256, 00:24:38.591 "data_size": 7936 00:24:38.591 }, 00:24:38.591 { 00:24:38.591 "name": "pt2", 00:24:38.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:38.591 "is_configured": true, 00:24:38.591 "data_offset": 256, 00:24:38.591 "data_size": 7936 00:24:38.591 } 00:24:38.591 ] 00:24:38.591 } 00:24:38.591 } 00:24:38.591 }' 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:38.591 pt2' 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.591 13:18:44 
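As an aside on the jq filters the test applies to the dump above: a standalone sketch of the same two extractions in Python, using an abridged, hand-reconstructed copy of the `bdev_get_bdevs -b raid_bdev1` JSON from the log (timestamps stripped, fields not touched by the filters omitted — field values are taken from the dump, not authoritative):

```python
import json

# Abridged reconstruction of the raid_bdev info dumped above by
# `rpc_cmd bdev_get_bdevs -b raid_bdev1` (only the fields the jq filters read).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "block_size": 4096,
  "num_blocks": 7936,
  "driver_specific": {
    "raid": {
      "raid_level": "raid1",
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true, "data_offset": 256, "data_size": 7936},
        {"name": "pt2", "is_configured": true, "data_offset": 256, "data_size": 7936}
      ]
    }
  }
}
""")

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [b["name"]
                   for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]

# jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# Absent keys render as empty strings, mirroring jq's null -> "" behavior in join();
# that is why the log's comparison string is "4096" followed by three spaces.
fields = [raid_bdev_info.get(k) for k in ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_raid_bdev = " ".join("" if v is None else str(v) for v in fields)

print(base_bdev_names)       # ['pt1', 'pt2']
print(repr(cmp_raid_bdev))   # '4096   '
```

This matches the log's `base_bdev_names='pt1 pt2'` and `cmp_raid_bdev='4096 '` (trailing spaces elided in the xtrace display), and explains the escaped pattern in `[[ 4096 == \4\0\9\6\ \ \ ]]`.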
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.591 13:18:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.591 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:38.591 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:38.591 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:38.591 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:38.591 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.592 [2024-12-06 13:18:45.081735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.592 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=32a4b585-f198-4c60-8618-6d0c3d192439 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 32a4b585-f198-4c60-8618-6d0c3d192439 ']' 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 [2024-12-06 13:18:45.133366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.849 [2024-12-06 13:18:45.133401] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:38.849 [2024-12-06 13:18:45.133597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:38.849 [2024-12-06 13:18:45.133730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:38.849 [2024-12-06 13:18:45.133756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 [2024-12-06 13:18:45.273565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:38.849 [2024-12-06 13:18:45.276760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:38.849 [2024-12-06 13:18:45.276995] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:38.849 [2024-12-06 13:18:45.277261] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:38.849 [2024-12-06 13:18:45.277440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:38.849 [2024-12-06 13:18:45.277641] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:38.849 request: 00:24:38.849 { 00:24:38.849 "name": "raid_bdev1", 00:24:38.849 "raid_level": "raid1", 00:24:38.849 "base_bdevs": [ 00:24:38.849 "malloc1", 00:24:38.849 "malloc2" 00:24:38.849 ], 00:24:38.849 "superblock": false, 00:24:38.849 "method": "bdev_raid_create", 00:24:38.849 "req_id": 1 00:24:38.849 } 00:24:38.849 Got JSON-RPC error response 00:24:38.849 response: 00:24:38.849 { 00:24:38.849 "code": -17, 00:24:38.849 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:38.849 } 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 [2024-12-06 13:18:45.342198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:38.849 [2024-12-06 13:18:45.342431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.849 [2024-12-06 13:18:45.342495] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:38.849 [2024-12-06 13:18:45.342520] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.849 [2024-12-06 13:18:45.346098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.849 [2024-12-06 13:18:45.346307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:38.849 [2024-12-06 13:18:45.346570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:38.849 [2024-12-06 13:18:45.346785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:38.849 pt1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- 
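A note on the negative test earlier in this run: the `NOT rpc_cmd bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1` call is expected to fail because both malloc bdevs still carry a superblock from `raid_bdev1`. The error envelope printed in the log is a plain JSON-RPC error object; a minimal sketch checking it (response body copied from the log, the errno mapping is an inference from the "File exists" message, not confirmed from SPDK source):

```python
import errno
import json

# Error object from the failed bdev_raid_create response in the log above
# (request echo and timestamps stripped).
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# "File exists" is strerror(EEXIST), so the -17 code appears to be -EEXIST:
# the base bdevs are already claimed by an existing raid superblock.
assert response["code"] == -errno.EEXIST
assert "File exists" in response["message"]
print("bdev_raid_create rejected as expected")
```

The test harness treats this the same way: `es=1` is recorded and the `NOT` wrapper inverts it into a pass.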
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.849 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.850 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.850 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.850 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.850 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:38.850 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.850 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.106 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.106 "name": "raid_bdev1", 00:24:39.106 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 00:24:39.106 "strip_size_kb": 0, 00:24:39.106 "state": "configuring", 00:24:39.106 "raid_level": "raid1", 00:24:39.106 "superblock": true, 00:24:39.106 "num_base_bdevs": 2, 00:24:39.106 "num_base_bdevs_discovered": 1, 00:24:39.106 "num_base_bdevs_operational": 2, 00:24:39.106 "base_bdevs_list": [ 00:24:39.106 { 00:24:39.106 "name": "pt1", 00:24:39.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.106 "is_configured": true, 00:24:39.106 "data_offset": 256, 00:24:39.106 "data_size": 7936 00:24:39.106 }, 00:24:39.106 { 00:24:39.106 "name": null, 00:24:39.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.106 "is_configured": false, 00:24:39.106 "data_offset": 256, 00:24:39.106 "data_size": 7936 00:24:39.106 } 
00:24:39.106 ] 00:24:39.106 }' 00:24:39.106 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.106 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.363 [2024-12-06 13:18:45.874954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:39.363 [2024-12-06 13:18:45.875229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.363 [2024-12-06 13:18:45.875296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:39.363 [2024-12-06 13:18:45.875324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.363 [2024-12-06 13:18:45.876077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.363 [2024-12-06 13:18:45.876147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:39.363 [2024-12-06 13:18:45.876275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:39.363 [2024-12-06 13:18:45.876335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:39.363 [2024-12-06 13:18:45.876549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:24:39.363 [2024-12-06 13:18:45.876583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:39.363 [2024-12-06 13:18:45.876953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:39.363 [2024-12-06 13:18:45.877156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:39.363 [2024-12-06 13:18:45.877172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:39.363 [2024-12-06 13:18:45.877431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.363 pt2 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.363 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.621 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.621 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.621 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.621 "name": "raid_bdev1", 00:24:39.621 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 00:24:39.621 "strip_size_kb": 0, 00:24:39.621 "state": "online", 00:24:39.621 "raid_level": "raid1", 00:24:39.621 "superblock": true, 00:24:39.621 "num_base_bdevs": 2, 00:24:39.621 "num_base_bdevs_discovered": 2, 00:24:39.621 "num_base_bdevs_operational": 2, 00:24:39.621 "base_bdevs_list": [ 00:24:39.621 { 00:24:39.621 "name": "pt1", 00:24:39.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.621 "is_configured": true, 00:24:39.621 "data_offset": 256, 00:24:39.621 "data_size": 7936 00:24:39.621 }, 00:24:39.621 { 00:24:39.621 "name": "pt2", 00:24:39.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.621 "is_configured": true, 00:24:39.621 "data_offset": 256, 00:24:39.621 "data_size": 7936 00:24:39.621 } 00:24:39.621 ] 00:24:39.621 }' 00:24:39.621 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.621 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.879 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:39.879 [2024-12-06 13:18:46.403460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:40.137 "name": "raid_bdev1", 00:24:40.137 "aliases": [ 00:24:40.137 "32a4b585-f198-4c60-8618-6d0c3d192439" 00:24:40.137 ], 00:24:40.137 "product_name": "Raid Volume", 00:24:40.137 "block_size": 4096, 00:24:40.137 "num_blocks": 7936, 00:24:40.137 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 00:24:40.137 "assigned_rate_limits": { 00:24:40.137 "rw_ios_per_sec": 0, 00:24:40.137 "rw_mbytes_per_sec": 0, 00:24:40.137 "r_mbytes_per_sec": 0, 00:24:40.137 "w_mbytes_per_sec": 0 00:24:40.137 }, 00:24:40.137 "claimed": false, 00:24:40.137 "zoned": false, 00:24:40.137 "supported_io_types": { 00:24:40.137 "read": true, 00:24:40.137 "write": true, 00:24:40.137 "unmap": false, 
00:24:40.137 "flush": false, 00:24:40.137 "reset": true, 00:24:40.137 "nvme_admin": false, 00:24:40.137 "nvme_io": false, 00:24:40.137 "nvme_io_md": false, 00:24:40.137 "write_zeroes": true, 00:24:40.137 "zcopy": false, 00:24:40.137 "get_zone_info": false, 00:24:40.137 "zone_management": false, 00:24:40.137 "zone_append": false, 00:24:40.137 "compare": false, 00:24:40.137 "compare_and_write": false, 00:24:40.137 "abort": false, 00:24:40.137 "seek_hole": false, 00:24:40.137 "seek_data": false, 00:24:40.137 "copy": false, 00:24:40.137 "nvme_iov_md": false 00:24:40.137 }, 00:24:40.137 "memory_domains": [ 00:24:40.137 { 00:24:40.137 "dma_device_id": "system", 00:24:40.137 "dma_device_type": 1 00:24:40.137 }, 00:24:40.137 { 00:24:40.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.137 "dma_device_type": 2 00:24:40.137 }, 00:24:40.137 { 00:24:40.137 "dma_device_id": "system", 00:24:40.137 "dma_device_type": 1 00:24:40.137 }, 00:24:40.137 { 00:24:40.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.137 "dma_device_type": 2 00:24:40.137 } 00:24:40.137 ], 00:24:40.137 "driver_specific": { 00:24:40.137 "raid": { 00:24:40.137 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 00:24:40.137 "strip_size_kb": 0, 00:24:40.137 "state": "online", 00:24:40.137 "raid_level": "raid1", 00:24:40.137 "superblock": true, 00:24:40.137 "num_base_bdevs": 2, 00:24:40.137 "num_base_bdevs_discovered": 2, 00:24:40.137 "num_base_bdevs_operational": 2, 00:24:40.137 "base_bdevs_list": [ 00:24:40.137 { 00:24:40.137 "name": "pt1", 00:24:40.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:40.137 "is_configured": true, 00:24:40.137 "data_offset": 256, 00:24:40.137 "data_size": 7936 00:24:40.137 }, 00:24:40.137 { 00:24:40.137 "name": "pt2", 00:24:40.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.137 "is_configured": true, 00:24:40.137 "data_offset": 256, 00:24:40.137 "data_size": 7936 00:24:40.137 } 00:24:40.137 ] 00:24:40.137 } 00:24:40.137 } 00:24:40.137 }' 00:24:40.137 
13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:40.137 pt2' 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:40.137 
13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.137 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:40.137 [2024-12-06 13:18:46.655439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 32a4b585-f198-4c60-8618-6d0c3d192439 '!=' 32a4b585-f198-4c60-8618-6d0c3d192439 ']' 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.395 [2024-12-06 13:18:46.711161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:40.395 
13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.395 "name": "raid_bdev1", 00:24:40.395 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 
00:24:40.395 "strip_size_kb": 0, 00:24:40.395 "state": "online", 00:24:40.395 "raid_level": "raid1", 00:24:40.395 "superblock": true, 00:24:40.395 "num_base_bdevs": 2, 00:24:40.395 "num_base_bdevs_discovered": 1, 00:24:40.395 "num_base_bdevs_operational": 1, 00:24:40.395 "base_bdevs_list": [ 00:24:40.395 { 00:24:40.395 "name": null, 00:24:40.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.395 "is_configured": false, 00:24:40.395 "data_offset": 0, 00:24:40.395 "data_size": 7936 00:24:40.395 }, 00:24:40.395 { 00:24:40.395 "name": "pt2", 00:24:40.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.395 "is_configured": true, 00:24:40.395 "data_offset": 256, 00:24:40.395 "data_size": 7936 00:24:40.395 } 00:24:40.395 ] 00:24:40.395 }' 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.395 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.738 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:40.738 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.738 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.738 [2024-12-06 13:18:47.235486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:40.738 [2024-12-06 13:18:47.235552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.738 [2024-12-06 13:18:47.235683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:40.738 [2024-12-06 13:18:47.235770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:40.738 [2024-12-06 13:18:47.235795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:40.738 13:18:47 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.738 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:40.738 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.739 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.739 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:24:40.998 13:18:47 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.998 [2024-12-06 13:18:47.323446] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:40.998 [2024-12-06 13:18:47.323693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:40.998 [2024-12-06 13:18:47.323736] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:40.998 [2024-12-06 13:18:47.323760] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:40.998 [2024-12-06 13:18:47.327093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:40.998 [2024-12-06 13:18:47.327349] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:40.998 [2024-12-06 13:18:47.327505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:40.998 [2024-12-06 13:18:47.327585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:40.998 [2024-12-06 13:18:47.327803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:40.998 [2024-12-06 13:18:47.327831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:40.998 pt2 00:24:40.998 [2024-12-06 13:18:47.328180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:40.998 [2024-12-06 13:18:47.328420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:40.998 [2024-12-06 13:18:47.328447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.998 [2024-12-06 13:18:47.328675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.998 13:18:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.998 "name": "raid_bdev1", 00:24:40.998 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439", 00:24:40.998 "strip_size_kb": 0, 00:24:40.998 "state": "online", 00:24:40.998 "raid_level": "raid1", 00:24:40.998 "superblock": true, 00:24:40.998 "num_base_bdevs": 2, 00:24:40.998 "num_base_bdevs_discovered": 1, 00:24:40.998 "num_base_bdevs_operational": 1, 00:24:40.998 "base_bdevs_list": [ 00:24:40.998 { 00:24:40.998 "name": null, 00:24:40.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.998 "is_configured": false, 00:24:40.998 "data_offset": 256, 00:24:40.999 "data_size": 7936 00:24:40.999 }, 00:24:40.999 { 00:24:40.999 "name": "pt2", 00:24:40.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.999 "is_configured": true, 00:24:40.999 "data_offset": 256, 00:24:40.999 "data_size": 7936 00:24:40.999 } 00:24:40.999 ] 00:24:40.999 }' 00:24:40.999 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.999 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.566 [2024-12-06 13:18:47.843766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.566 [2024-12-06 13:18:47.843815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.566 [2024-12-06 13:18:47.843954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.566 [2024-12-06 13:18:47.844041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.566 [2024-12-06 13:18:47.844061] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:41.566 [2024-12-06 13:18:47.911797] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:41.566 [2024-12-06 13:18:47.911922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.566 [2024-12-06 13:18:47.911970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:41.566 [2024-12-06 13:18:47.911992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.566 [2024-12-06 13:18:47.915419] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.566 [2024-12-06 13:18:47.915761] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:41.566 [2024-12-06 13:18:47.915906] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:41.566 [2024-12-06 13:18:47.915992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:41.566 [2024-12-06 13:18:47.916293] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:41.566 [2024-12-06 13:18:47.916315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.566 [2024-12-06 13:18:47.916354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:41.566 [2024-12-06 13:18:47.916429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:41.566 pt1 00:24:41.566 [2024-12-06 13:18:47.916585] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:41.566 [2024-12-06 13:18:47.916604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:41.566 [2024-12-06 13:18:47.916961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:41.566 [2024-12-06 13:18:47.917193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:41.566 [2024-12-06 13:18:47.917288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.566 [2024-12-06 13:18:47.917695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.566 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:24:41.567 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.567 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:41.567 "name": "raid_bdev1",
00:24:41.567 "uuid": "32a4b585-f198-4c60-8618-6d0c3d192439",
00:24:41.567 "strip_size_kb": 0,
00:24:41.567 "state": "online",
00:24:41.567 "raid_level": "raid1",
00:24:41.567 "superblock": true,
00:24:41.567 "num_base_bdevs": 2,
00:24:41.567 "num_base_bdevs_discovered": 1,
00:24:41.567 "num_base_bdevs_operational": 1,
00:24:41.567 "base_bdevs_list": [
00:24:41.567 {
00:24:41.567 "name": null,
00:24:41.567 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:41.567 "is_configured": false,
00:24:41.567 "data_offset": 256,
00:24:41.567 "data_size": 7936
00:24:41.567 },
00:24:41.567 {
00:24:41.567 "name": "pt2",
00:24:41.567 "uuid": "00000000-0000-0000-0000-000000000002",
00:24:41.567 "is_configured": true,
00:24:41.567 "data_offset": 256,
00:24:41.567 "data_size": 7936
00:24:41.567 }
00:24:41.567 ]
00:24:41.567 }'
00:24:41.567 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:41.567 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:24:42.135 [2024-12-06 13:18:48.500485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 32a4b585-f198-4c60-8618-6d0c3d192439 '!=' 32a4b585-f198-4c60-8618-6d0c3d192439 ']'
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86939
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86939 ']'
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86939
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86939
00:24:42.135 killing process with pid 86939 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86939'
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86939
00:24:42.135 [2024-12-06 13:18:48.590029] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:42.135 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86939
00:24:42.135 [2024-12-06 13:18:48.590195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:42.135 [2024-12-06 13:18:48.590325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:42.135 [2024-12-06 13:18:48.590355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:24:42.420 [2024-12-06 13:18:48.798618] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:43.797 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0
00:24:43.797
00:24:43.797 real 0m6.944s
00:24:43.797 user 0m10.755s
00:24:43.797 sys 0m1.124s ************************************
00:24:43.797 END TEST raid_superblock_test_4k ************************************
00:24:43.797 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:43.797 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:24:43.797 13:18:50 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']'
00:24:43.797 13:18:50 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true
00:24:43.797 13:18:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:24:43.797 13:18:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:43.797 13:18:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:24:43.797 ************************************
00:24:43.797 START TEST raid_rebuild_test_sb_4k ************************************
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size
00:24:43.797 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87268
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87268
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87268 ']'
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:43.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:43.798 13:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:43.798 [2024-12-06 13:18:50.175644] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization...
00:24:43.798 [2024-12-06 13:18:50.176098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87268 ]
00:24:43.798 I/O size of 3145728 is greater than zero copy threshold (65536).
00:24:43.798 Zero copy mechanism will not be used.
00:24:44.056 [2024-12-06 13:18:50.358187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:44.056 [2024-12-06 13:18:50.512034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:44.314 [2024-12-06 13:18:50.761148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size [2024-12-06 13:18:50.761246] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:44.882 BaseBdev1_malloc
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:44.882 [2024-12-06 13:18:51.273567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:24:44.882 [2024-12-06 13:18:51.273670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:44.882 [2024-12-06 13:18:51.273703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:24:44.882 [2024-12-06 13:18:51.273723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:44.882 [2024-12-06 13:18:51.276624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:44.882 [2024-12-06 13:18:51.276686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:44.882 BaseBdev1
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:44.882 BaseBdev2_malloc
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:44.882 [2024-12-06 13:18:51.327496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:24:44.882 [2024-12-06 13:18:51.327596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:44.882 [2024-12-06 13:18:51.327644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:24:44.882 [2024-12-06 13:18:51.327663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:44.882 [2024-12-06 13:18:51.330719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:44.882 [2024-12-06 13:18:51.330770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:24:44.882 BaseBdev2
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:44.882 spare_malloc
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:44.882 spare_delay
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.882 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:45.142 [2024-12-06 13:18:51.410641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:45.142 [2024-12-06 13:18:51.410914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:45.142 [2024-12-06 13:18:51.410955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:24:45.142 [2024-12-06 13:18:51.410975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:45.142 [2024-12-06 13:18:51.414068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:45.142 [2024-12-06 13:18:51.414311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:45.142 spare
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:45.142 [2024-12-06 13:18:51.422750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:45.142 [2024-12-06 13:18:51.425372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:45.142 [2024-12-06 13:18:51.425850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:24:45.142 [2024-12-06 13:18:51.425882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:24:45.142 [2024-12-06 13:18:51.426247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:24:45.142 [2024-12-06 13:18:51.426540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:24:45.142 [2024-12-06 13:18:51.426559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:24:45.142 [2024-12-06 13:18:51.426802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:45.142 "name": "raid_bdev1",
00:24:45.142 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:45.142 "strip_size_kb": 0,
00:24:45.142 "state": "online",
00:24:45.142 "raid_level": "raid1",
00:24:45.142 "superblock": true,
00:24:45.142 "num_base_bdevs": 2,
00:24:45.142 "num_base_bdevs_discovered": 2,
00:24:45.142 "num_base_bdevs_operational": 2,
00:24:45.142 "base_bdevs_list": [
00:24:45.142 {
00:24:45.142 "name": "BaseBdev1",
00:24:45.142 "uuid": "126d43ef-00c7-540c-8041-0ff4a3d0bea7",
00:24:45.142 "is_configured": true,
00:24:45.142 "data_offset": 256,
00:24:45.142 "data_size": 7936
00:24:45.142 },
00:24:45.142 {
00:24:45.142 "name": "BaseBdev2",
00:24:45.142 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:45.142 "is_configured": true,
00:24:45.142 "data_offset": 256,
00:24:45.142 "data_size": 7936
00:24:45.142 }
00:24:45.142 ]
00:24:45.142 }'
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:45.142 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:45.709 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:24:45.709 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:24:45.709 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.709 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:45.709 [2024-12-06 13:18:51.987587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:45.709 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:24:45.968 [2024-12-06 13:18:52.391202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 /dev/nbd0 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:45.968 1+0 records in
00:24:45.968 1+0 records out
00:24:45.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663883 s, 6.2 MB/s
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:24:45.968 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:24:46.904 7936+0 records in
00:24:46.904 7936+0 records out
00:24:46.904 32505856 bytes (33 MB, 31 MiB) copied, 0.962247 s, 33.8 MB/s
00:24:46.904 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:24:46.904 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:24:46.905 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:24:46.905 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:46.905 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:24:46.905 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:46.905 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:47.472 [2024-12-06 13:18:53.739950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:47.472 [2024-12-06 13:18:53.757037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:47.472 "name": "raid_bdev1",
00:24:47.472 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:47.472 "strip_size_kb": 0,
00:24:47.472 "state": "online",
00:24:47.472 "raid_level": "raid1",
00:24:47.472 "superblock": true,
00:24:47.472 "num_base_bdevs": 2,
00:24:47.472 "num_base_bdevs_discovered": 1,
00:24:47.472 "num_base_bdevs_operational": 1,
00:24:47.472 "base_bdevs_list": [
00:24:47.472 {
00:24:47.472 "name": null,
00:24:47.472 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:47.472 "is_configured": false,
00:24:47.472 "data_offset": 0,
00:24:47.472 "data_size": 7936
00:24:47.472 },
00:24:47.472 {
00:24:47.472 "name": "BaseBdev2",
00:24:47.472 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:47.472 "is_configured": true,
00:24:47.472 "data_offset": 256,
00:24:47.472 "data_size": 7936
00:24:47.472 }
00:24:47.472 ]
00:24:47.472 }'
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:47.472 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:48.040 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:24:48.040 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.040 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:48.040 [2024-12-06 13:18:54.269204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:48.040 [2024-12-06 13:18:54.286890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:24:48.040 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.040 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1
00:24:48.040 [2024-12-06 13:18:54.289370] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:48.977 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:48.977 "name": "raid_bdev1",
00:24:48.977 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:48.977 "strip_size_kb": 0,
00:24:48.977 "state": "online",
00:24:48.977 "raid_level": "raid1",
00:24:48.977 "superblock": true,
00:24:48.977 "num_base_bdevs": 2,
00:24:48.977 "num_base_bdevs_discovered": 2,
00:24:48.977 "num_base_bdevs_operational": 2,
00:24:48.977 "process": {
00:24:48.977 "type": "rebuild",
00:24:48.977 "target": "spare",
00:24:48.977 "progress": {
00:24:48.977 "blocks": 2560,
00:24:48.977 "percent": 32
00:24:48.977 }
00:24:48.977 },
00:24:48.978 "base_bdevs_list": [
00:24:48.978 {
00:24:48.978 "name": "spare",
00:24:48.978 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c",
00:24:48.978 "is_configured": true,
00:24:48.978 "data_offset": 256,
00:24:48.978 "data_size": 7936
00:24:48.978 },
00:24:48.978 {
00:24:48.978 "name": "BaseBdev2",
00:24:48.978 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:48.978 "is_configured": true,
00:24:48.978 "data_offset": 256,
00:24:48.978 "data_size": 7936
00:24:48.978 }
00:24:48.978 ]
00:24:48.978 }'
00:24:48.978 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:48.978 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:48.978 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:48.978 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:24:48.978 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:24:48.978 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:48.978 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:48.978 [2024-12-06 13:18:55.451098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:48.978 [2024-12-06 13:18:55.498337] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:48.978 [2024-12-06 13:18:55.498430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:48.978 [2024-12-06 13:18:55.498476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:48.978 [2024-12-06 13:18:55.498494] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:49.236 "name": "raid_bdev1",
00:24:49.236 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:49.236 "strip_size_kb": 0,
00:24:49.236 "state": "online",
00:24:49.236 "raid_level": "raid1",
00:24:49.236 "superblock": true,
00:24:49.236 "num_base_bdevs": 2,
00:24:49.236 "num_base_bdevs_discovered": 1,
00:24:49.236 "num_base_bdevs_operational": 1,
00:24:49.236 "base_bdevs_list": [
00:24:49.236 {
00:24:49.236 "name": null,
00:24:49.236 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:49.236 "is_configured": false,
00:24:49.236 "data_offset": 0,
00:24:49.236 "data_size": 7936
00:24:49.236 },
00:24:49.236 {
00:24:49.236 "name": "BaseBdev2",
00:24:49.236 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:49.236 "is_configured": true,
00:24:49.236 "data_offset": 256,
00:24:49.236 "data_size": 7936
00:24:49.236 }
00:24:49.236 ]
00:24:49.236 }'
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:49.236 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:49.802 13:18:56
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:49.802 "name": "raid_bdev1", 00:24:49.802 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:24:49.802 "strip_size_kb": 0, 00:24:49.802 "state": "online", 00:24:49.802 "raid_level": "raid1", 00:24:49.802 "superblock": true, 00:24:49.802 "num_base_bdevs": 2, 00:24:49.802 "num_base_bdevs_discovered": 1, 00:24:49.802 "num_base_bdevs_operational": 1, 00:24:49.802 "base_bdevs_list": [ 00:24:49.802 { 00:24:49.802 "name": null, 00:24:49.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.802 "is_configured": false, 00:24:49.802 "data_offset": 0, 00:24:49.802 "data_size": 7936 00:24:49.802 }, 00:24:49.802 { 00:24:49.802 "name": "BaseBdev2", 00:24:49.802 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:24:49.802 "is_configured": true, 00:24:49.802 "data_offset": 256, 00:24:49.802 "data_size": 7936 00:24:49.802 } 00:24:49.802 ] 00:24:49.802 }' 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:49.802 13:18:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:49.802 [2024-12-06 13:18:56.214342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:49.802 [2024-12-06 13:18:56.230238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.802 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:49.802 [2024-12-06 13:18:56.232750] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.769 13:18:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:50.769 "name": "raid_bdev1", 00:24:50.769 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:24:50.769 "strip_size_kb": 0, 00:24:50.769 "state": "online", 00:24:50.769 "raid_level": "raid1", 00:24:50.769 "superblock": true, 00:24:50.769 "num_base_bdevs": 2, 00:24:50.769 "num_base_bdevs_discovered": 2, 00:24:50.769 "num_base_bdevs_operational": 2, 00:24:50.769 "process": { 00:24:50.769 "type": "rebuild", 00:24:50.769 "target": "spare", 00:24:50.769 "progress": { 00:24:50.769 "blocks": 2560, 00:24:50.769 "percent": 32 00:24:50.769 } 00:24:50.769 }, 00:24:50.769 "base_bdevs_list": [ 00:24:50.769 { 00:24:50.769 "name": "spare", 00:24:50.769 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c", 00:24:50.769 "is_configured": true, 00:24:50.769 "data_offset": 256, 00:24:50.769 "data_size": 7936 00:24:50.769 }, 00:24:50.769 { 00:24:50.769 "name": "BaseBdev2", 00:24:50.769 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:24:50.769 "is_configured": true, 00:24:50.769 "data_offset": 256, 00:24:50.769 "data_size": 7936 00:24:50.769 } 00:24:50.769 ] 00:24:50.769 }' 00:24:50.769 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:51.033 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=749 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:51.033 13:18:57 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.033 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:51.033 "name": "raid_bdev1", 00:24:51.033 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:24:51.033 "strip_size_kb": 0, 00:24:51.033 "state": "online", 00:24:51.033 "raid_level": "raid1", 00:24:51.033 "superblock": true, 00:24:51.033 "num_base_bdevs": 2, 00:24:51.033 "num_base_bdevs_discovered": 2, 00:24:51.033 "num_base_bdevs_operational": 2, 00:24:51.033 "process": { 00:24:51.033 "type": "rebuild", 00:24:51.033 "target": "spare", 00:24:51.033 "progress": { 00:24:51.033 "blocks": 2816, 00:24:51.033 "percent": 35 00:24:51.033 } 00:24:51.033 }, 00:24:51.033 "base_bdevs_list": [ 00:24:51.033 { 00:24:51.033 "name": "spare", 00:24:51.033 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 256, 00:24:51.033 "data_size": 7936 00:24:51.033 }, 00:24:51.033 { 00:24:51.033 "name": "BaseBdev2", 00:24:51.033 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:24:51.033 "is_configured": true, 00:24:51.033 "data_offset": 256, 00:24:51.033 "data_size": 7936 00:24:51.033 } 00:24:51.033 ] 00:24:51.033 }' 00:24:51.034 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:51.034 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:51.034 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:51.034 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.034 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.413 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.413 "name": "raid_bdev1", 00:24:52.413 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:24:52.413 "strip_size_kb": 0, 00:24:52.413 "state": "online", 00:24:52.413 "raid_level": "raid1", 00:24:52.413 "superblock": true, 00:24:52.413 "num_base_bdevs": 2, 00:24:52.413 "num_base_bdevs_discovered": 2, 00:24:52.413 "num_base_bdevs_operational": 2, 00:24:52.413 "process": { 00:24:52.413 "type": "rebuild", 00:24:52.413 "target": "spare", 00:24:52.413 "progress": { 00:24:52.413 "blocks": 5632, 00:24:52.413 "percent": 70 00:24:52.413 } 00:24:52.413 }, 00:24:52.413 "base_bdevs_list": [ 00:24:52.413 { 00:24:52.413 "name": "spare", 00:24:52.413 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c", 00:24:52.413 "is_configured": true, 00:24:52.413 "data_offset": 256, 00:24:52.413 "data_size": 7936 00:24:52.413 
}, 00:24:52.413 { 00:24:52.413 "name": "BaseBdev2", 00:24:52.413 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:24:52.413 "is_configured": true, 00:24:52.413 "data_offset": 256, 00:24:52.413 "data_size": 7936 00:24:52.413 } 00:24:52.413 ] 00:24:52.414 }' 00:24:52.414 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.414 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:52.414 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:52.414 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:52.414 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:52.981 [2024-12-06 13:18:59.355464] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:52.981 [2024-12-06 13:18:59.355604] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:52.981 [2024-12-06 13:18:59.355776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:53.239 "name": "raid_bdev1", 00:24:53.239 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:24:53.239 "strip_size_kb": 0, 00:24:53.239 "state": "online", 00:24:53.239 "raid_level": "raid1", 00:24:53.239 "superblock": true, 00:24:53.239 "num_base_bdevs": 2, 00:24:53.239 "num_base_bdevs_discovered": 2, 00:24:53.239 "num_base_bdevs_operational": 2, 00:24:53.239 "base_bdevs_list": [ 00:24:53.239 { 00:24:53.239 "name": "spare", 00:24:53.239 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c", 00:24:53.239 "is_configured": true, 00:24:53.239 "data_offset": 256, 00:24:53.239 "data_size": 7936 00:24:53.239 }, 00:24:53.239 { 00:24:53.239 "name": "BaseBdev2", 00:24:53.239 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:24:53.239 "is_configured": true, 00:24:53.239 "data_offset": 256, 00:24:53.239 "data_size": 7936 00:24:53.239 } 00:24:53.239 ] 00:24:53.239 }' 00:24:53.239 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:53.498 "name": "raid_bdev1", 00:24:53.498 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:24:53.498 "strip_size_kb": 0, 00:24:53.498 "state": "online", 00:24:53.498 "raid_level": "raid1", 00:24:53.498 "superblock": true, 00:24:53.498 "num_base_bdevs": 2, 00:24:53.498 "num_base_bdevs_discovered": 2, 00:24:53.498 "num_base_bdevs_operational": 2, 00:24:53.498 "base_bdevs_list": [ 00:24:53.498 { 00:24:53.498 "name": "spare", 00:24:53.498 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c", 00:24:53.498 "is_configured": true, 00:24:53.498 "data_offset": 256, 00:24:53.498 "data_size": 7936 00:24:53.498 }, 00:24:53.498 { 00:24:53.498 "name": "BaseBdev2", 00:24:53.498 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:24:53.498 "is_configured": true, 
00:24:53.498 "data_offset": 256, 00:24:53.498 "data_size": 7936 00:24:53.498 } 00:24:53.498 ] 00:24:53.498 }' 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:53.498 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.498 13:19:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.498 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:53.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:53.771 "name": "raid_bdev1", 00:24:53.771 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:24:53.771 "strip_size_kb": 0, 00:24:53.771 "state": "online", 00:24:53.771 "raid_level": "raid1", 00:24:53.771 "superblock": true, 00:24:53.771 "num_base_bdevs": 2, 00:24:53.771 "num_base_bdevs_discovered": 2, 00:24:53.771 "num_base_bdevs_operational": 2, 00:24:53.771 "base_bdevs_list": [ 00:24:53.771 { 00:24:53.771 "name": "spare", 00:24:53.771 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c", 00:24:53.771 "is_configured": true, 00:24:53.771 "data_offset": 256, 00:24:53.771 "data_size": 7936 00:24:53.771 }, 00:24:53.771 { 00:24:53.771 "name": "BaseBdev2", 00:24:53.771 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:24:53.771 "is_configured": true, 00:24:53.771 "data_offset": 256, 00:24:53.771 "data_size": 7936 00:24:53.771 } 00:24:53.771 ] 00:24:53.771 }' 00:24:53.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:53.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.028 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:54.028 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.028 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.028 [2024-12-06 13:19:00.540579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:54.029 [2024-12-06 13:19:00.540618] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:24:54.029 [2024-12-06 13:19:00.540718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:54.029 [2024-12-06 13:19:00.540826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:54.029 [2024-12-06 13:19:00.540847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:54.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:24:54.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:54.286 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:54.544 /dev/nbd0 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:54.544 1+0 records in 00:24:54.544 1+0 records out 00:24:54.544 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616291 s, 6.6 MB/s
00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096
00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0
00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:54.544 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:24:54.802 /dev/nbd1
00:24:54.802 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:24:54.802 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:54.803 1+0 records in
00:24:54.803 1+0 records out
00:24:54.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348427 s, 11.8 MB/s
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:54.803 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:24:55.061 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:24:55.061 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:24:55.061 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:55.061 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:55.061 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:24:55.061 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:55.061 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:55.318 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:55.577 [2024-12-06 13:19:02.058548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:55.577 [2024-12-06 13:19:02.058770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:55.577 [2024-12-06 13:19:02.058818] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:24:55.577 [2024-12-06 13:19:02.058842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:55.577 [2024-12-06 13:19:02.062080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:55.577 [2024-12-06 13:19:02.062269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:55.577 [2024-12-06 13:19:02.062428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:24:55.577 [2024-12-06 13:19:02.062520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:55.577 [2024-12-06 13:19:02.062732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:55.577 spare
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.577 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:55.905 [2024-12-06 13:19:02.162937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:24:55.905 [2024-12-06 13:19:02.163002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:24:55.905 [2024-12-06 13:19:02.163495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50
00:24:55.905 [2024-12-06 13:19:02.163807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:24:55.905 [2024-12-06 13:19:02.163831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:24:55.905 [2024-12-06 13:19:02.164094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.905 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:55.905 "name": "raid_bdev1",
00:24:55.905 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:55.905 "strip_size_kb": 0,
00:24:55.905 "state": "online",
00:24:55.905 "raid_level": "raid1",
00:24:55.905 "superblock": true,
00:24:55.905 "num_base_bdevs": 2,
00:24:55.905 "num_base_bdevs_discovered": 2,
00:24:55.905 "num_base_bdevs_operational": 2,
00:24:55.905 "base_bdevs_list": [
00:24:55.905 {
00:24:55.905 "name": "spare",
00:24:55.906 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c",
00:24:55.906 "is_configured": true,
00:24:55.906 "data_offset": 256,
00:24:55.906 "data_size": 7936
00:24:55.906 },
00:24:55.906 {
00:24:55.906 "name": "BaseBdev2",
00:24:55.906 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:55.906 "is_configured": true,
00:24:55.906 "data_offset": 256,
00:24:55.906 "data_size": 7936
00:24:55.906 }
00:24:55.906 ]
00:24:55.906 }'
00:24:55.906 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:55.906 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.164 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:56.422 "name": "raid_bdev1",
00:24:56.422 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:56.422 "strip_size_kb": 0,
00:24:56.422 "state": "online",
00:24:56.422 "raid_level": "raid1",
00:24:56.422 "superblock": true,
00:24:56.422 "num_base_bdevs": 2,
00:24:56.422 "num_base_bdevs_discovered": 2,
00:24:56.422 "num_base_bdevs_operational": 2,
00:24:56.422 "base_bdevs_list": [
00:24:56.422 {
00:24:56.422 "name": "spare",
00:24:56.422 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c",
00:24:56.422 "is_configured": true,
00:24:56.422 "data_offset": 256,
00:24:56.422 "data_size": 7936
00:24:56.422 },
00:24:56.422 {
00:24:56.422 "name": "BaseBdev2",
00:24:56.422 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:56.422 "is_configured": true,
00:24:56.422 "data_offset": 256,
00:24:56.422 "data_size": 7936
00:24:56.422 }
00:24:56.422 ]
00:24:56.422 }'
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:56.422 [2024-12-06 13:19:02.855095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:56.422 "name": "raid_bdev1",
00:24:56.422 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:56.422 "strip_size_kb": 0,
00:24:56.422 "state": "online",
00:24:56.422 "raid_level": "raid1",
00:24:56.422 "superblock": true,
00:24:56.422 "num_base_bdevs": 2,
00:24:56.422 "num_base_bdevs_discovered": 1,
00:24:56.422 "num_base_bdevs_operational": 1,
00:24:56.422 "base_bdevs_list": [
00:24:56.422 {
00:24:56.422 "name": null,
00:24:56.422 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:56.422 "is_configured": false,
00:24:56.422 "data_offset": 0,
00:24:56.422 "data_size": 7936
00:24:56.422 },
00:24:56.422 {
00:24:56.422 "name": "BaseBdev2",
00:24:56.422 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:56.422 "is_configured": true,
00:24:56.422 "data_offset": 256,
00:24:56.422 "data_size": 7936
00:24:56.422 }
00:24:56.422 ]
00:24:56.422 }'
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:56.422 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:56.986 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:24:56.986 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.987 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:56.987 [2024-12-06 13:19:03.363302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:56.987 [2024-12-06 13:19:03.363588] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:24:56.987 [2024-12-06 13:19:03.363615] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:24:56.987 [2024-12-06 13:19:03.363659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:56.987 [2024-12-06 13:19:03.379431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20
00:24:56.987 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.987 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1
00:24:56.987 [2024-12-06 13:19:03.382042] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:57.936 "name": "raid_bdev1",
00:24:57.936 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:57.936 "strip_size_kb": 0,
00:24:57.936 "state": "online",
00:24:57.936 "raid_level": "raid1",
00:24:57.936 "superblock": true,
00:24:57.936 "num_base_bdevs": 2,
00:24:57.936 "num_base_bdevs_discovered": 2,
00:24:57.936 "num_base_bdevs_operational": 2,
00:24:57.936 "process": {
00:24:57.936 "type": "rebuild",
00:24:57.936 "target": "spare",
00:24:57.936 "progress": {
00:24:57.936 "blocks": 2560,
00:24:57.936 "percent": 32
00:24:57.936 }
00:24:57.936 },
00:24:57.936 "base_bdevs_list": [
00:24:57.936 {
00:24:57.936 "name": "spare",
00:24:57.936 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c",
00:24:57.936 "is_configured": true,
00:24:57.936 "data_offset": 256,
00:24:57.936 "data_size": 7936
00:24:57.936 },
00:24:57.936 {
00:24:57.936 "name": "BaseBdev2",
00:24:57.936 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:57.936 "is_configured": true,
00:24:57.936 "data_offset": 256,
00:24:57.936 "data_size": 7936
00:24:57.936 }
00:24:57.936 ]
00:24:57.936 }'
00:24:57.936 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:58.194 [2024-12-06 13:19:04.555735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:58.194 [2024-12-06 13:19:04.591046] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:58.194 [2024-12-06 13:19:04.591141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:58.194 [2024-12-06 13:19:04.591168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:58.194 [2024-12-06 13:19:04.591184] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:58.194 "name": "raid_bdev1",
00:24:58.194 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:58.194 "strip_size_kb": 0,
00:24:58.194 "state": "online",
00:24:58.194 "raid_level": "raid1",
00:24:58.194 "superblock": true,
00:24:58.194 "num_base_bdevs": 2,
00:24:58.194 "num_base_bdevs_discovered": 1,
00:24:58.194 "num_base_bdevs_operational": 1,
00:24:58.194 "base_bdevs_list": [
00:24:58.194 {
00:24:58.194 "name": null,
00:24:58.194 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:58.194 "is_configured": false,
00:24:58.194 "data_offset": 0,
00:24:58.194 "data_size": 7936
00:24:58.194 },
00:24:58.194 {
00:24:58.194 "name": "BaseBdev2",
00:24:58.194 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:58.194 "is_configured": true,
00:24:58.194 "data_offset": 256,
00:24:58.194 "data_size": 7936
00:24:58.194 }
00:24:58.194 ]
00:24:58.194 }'
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:58.194 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:58.760 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:24:58.760 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:58.760 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:58.760 [2024-12-06 13:19:05.115201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:58.760 [2024-12-06 13:19:05.115431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:58.760 [2024-12-06 13:19:05.115492] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:24:58.760 [2024-12-06 13:19:05.115514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:58.760 [2024-12-06 13:19:05.116130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:58.760 [2024-12-06 13:19:05.116170] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:58.760 [2024-12-06 13:19:05.116293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:24:58.760 [2024-12-06 13:19:05.116326] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:24:58.760 [2024-12-06 13:19:05.116341] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:24:58.760 [2024-12-06 13:19:05.116376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:58.760 [2024-12-06 13:19:05.131968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0
00:24:58.760 spare
00:24:58.760 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:58.760 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1
00:24:58.760 [2024-12-06 13:19:05.134527] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:59.693 "name": "raid_bdev1",
00:24:59.693 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:59.693 "strip_size_kb": 0,
00:24:59.693 "state": "online",
00:24:59.693 "raid_level": "raid1",
00:24:59.693 "superblock": true,
00:24:59.693 "num_base_bdevs": 2,
00:24:59.693 "num_base_bdevs_discovered": 2,
00:24:59.693 "num_base_bdevs_operational": 2,
00:24:59.693 "process": {
00:24:59.693 "type": "rebuild",
00:24:59.693 "target": "spare",
00:24:59.693 "progress": {
00:24:59.693 "blocks": 2560,
00:24:59.693 "percent": 32
00:24:59.693 }
00:24:59.693 },
00:24:59.693 "base_bdevs_list": [
00:24:59.693 {
00:24:59.693 "name": "spare",
00:24:59.693 "uuid": "fb2fb2fa-7747-51c5-820e-692e2213174c",
00:24:59.693 "is_configured": true,
00:24:59.693 "data_offset": 256,
00:24:59.693 "data_size": 7936
00:24:59.693 },
00:24:59.693 {
00:24:59.693 "name": "BaseBdev2",
00:24:59.693 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:59.693 "is_configured": true,
00:24:59.693 "data_offset": 256,
00:24:59.693 "data_size": 7936
00:24:59.693 }
00:24:59.693 ]
00:24:59.693 }'
00:24:59.693 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:59.952 [2024-12-06 13:19:06.303872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:59.952 [2024-12-06 13:19:06.343791] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:59.952 [2024-12-06 13:19:06.344047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:59.952 [2024-12-06 13:19:06.344196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:59.952 [2024-12-06 13:19:06.344269] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:59.952 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:59.953 "name": "raid_bdev1",
00:24:59.953 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:24:59.953 "strip_size_kb": 0,
00:24:59.953 "state": "online",
00:24:59.953 "raid_level": "raid1",
00:24:59.953 "superblock": true,
00:24:59.953 "num_base_bdevs": 2,
00:24:59.953 "num_base_bdevs_discovered": 1,
00:24:59.953 "num_base_bdevs_operational": 1,
00:24:59.953 "base_bdevs_list": [
00:24:59.953 {
00:24:59.953 "name": null,
00:24:59.953 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:59.953 "is_configured": false,
00:24:59.953 "data_offset": 0,
00:24:59.953 "data_size": 7936
00:24:59.953 },
00:24:59.953 {
00:24:59.953 "name": "BaseBdev2",
00:24:59.953 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:24:59.953 "is_configured": true,
00:24:59.953 "data_offset": 256,
00:24:59.953 "data_size": 7936
00:24:59.953 }
00:24:59.953 ]
00:24:59.953 }'
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:59.953 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:25:00.521 "name": "raid_bdev1",
00:25:00.521 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b",
00:25:00.521 "strip_size_kb": 0,
00:25:00.521 "state": "online",
00:25:00.521 "raid_level": "raid1",
00:25:00.521 "superblock": true,
00:25:00.521 "num_base_bdevs": 2,
00:25:00.521 "num_base_bdevs_discovered": 1,
00:25:00.521 "num_base_bdevs_operational": 1,
00:25:00.521 "base_bdevs_list": [
00:25:00.521 {
00:25:00.521 "name": null,
00:25:00.521 "uuid": "00000000-0000-0000-0000-000000000000",
00:25:00.521 "is_configured": false,
00:25:00.521 "data_offset": 0,
00:25:00.521 "data_size": 7936
00:25:00.521 },
00:25:00.521 {
00:25:00.521 "name": "BaseBdev2",
00:25:00.521 "uuid": "77895156-049c-5c94-a13f-f66e97850264",
00:25:00.521 "is_configured": true,
00:25:00.521 "data_offset": 256,
00:25:00.521 "data_size": 7936
00:25:00.521 }
00:25:00.521 ]
00:25:00.521 }'
00:25:00.521 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:25:00.521 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:25:00.521 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:25:00.779 [2024-12-06 13:19:07.072220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:25:00.779 [2024-12-06 13:19:07.072293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:00.779 [2024-12-06 13:19:07.072328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:25:00.779 [2024-12-06 13:19:07.072354] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:00.779 [2024-12-06 13:19:07.072980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:00.779 [2024-12-06 13:19:07.073013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:25:00.779 [2024-12-06 13:19:07.073116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:25:00.779 [2024-12-06 13:19:07.073146] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:25:00.779 [2024-12-06 13:19:07.073163] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:25:00.779 [2024-12-06 13:19:07.073176] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:25:00.779 BaseBdev1
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.779 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1
00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k --
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.736 "name": "raid_bdev1", 00:25:01.736 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:25:01.736 "strip_size_kb": 0, 00:25:01.736 "state": "online", 00:25:01.736 "raid_level": "raid1", 00:25:01.736 "superblock": true, 00:25:01.736 "num_base_bdevs": 2, 00:25:01.736 "num_base_bdevs_discovered": 1, 00:25:01.736 "num_base_bdevs_operational": 1, 00:25:01.736 "base_bdevs_list": [ 00:25:01.736 { 00:25:01.736 "name": null, 00:25:01.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.736 "is_configured": false, 00:25:01.736 "data_offset": 0, 00:25:01.736 "data_size": 7936 00:25:01.736 }, 00:25:01.736 { 00:25:01.736 "name": "BaseBdev2", 00:25:01.736 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:25:01.736 "is_configured": true, 00:25:01.736 "data_offset": 256, 00:25:01.736 "data_size": 7936 00:25:01.736 } 00:25:01.736 ] 00:25:01.736 }' 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.736 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:02.347 "name": "raid_bdev1", 00:25:02.347 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:25:02.347 "strip_size_kb": 0, 00:25:02.347 "state": "online", 00:25:02.347 "raid_level": "raid1", 00:25:02.347 "superblock": true, 00:25:02.347 "num_base_bdevs": 2, 00:25:02.347 "num_base_bdevs_discovered": 1, 00:25:02.347 "num_base_bdevs_operational": 1, 00:25:02.347 "base_bdevs_list": [ 00:25:02.347 { 00:25:02.347 "name": null, 00:25:02.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.347 "is_configured": false, 00:25:02.347 "data_offset": 0, 00:25:02.347 "data_size": 7936 00:25:02.347 }, 00:25:02.347 { 00:25:02.347 "name": "BaseBdev2", 00:25:02.347 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:25:02.347 "is_configured": true, 
00:25:02.347 "data_offset": 256, 00:25:02.347 "data_size": 7936 00:25:02.347 } 00:25:02.347 ] 00:25:02.347 }' 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:02.347 [2024-12-06 13:19:08.752810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:02.347 [2024-12-06 13:19:08.753186] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:02.347 [2024-12-06 13:19:08.753217] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:02.347 request: 00:25:02.347 { 00:25:02.347 "base_bdev": "BaseBdev1", 00:25:02.347 "raid_bdev": "raid_bdev1", 00:25:02.347 "method": "bdev_raid_add_base_bdev", 00:25:02.347 "req_id": 1 00:25:02.347 } 00:25:02.347 Got JSON-RPC error response 00:25:02.347 response: 00:25:02.347 { 00:25:02.347 "code": -22, 00:25:02.347 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:02.347 } 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:02.347 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.282 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.544 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.544 "name": "raid_bdev1", 00:25:03.544 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:25:03.544 "strip_size_kb": 0, 00:25:03.544 "state": "online", 00:25:03.544 "raid_level": "raid1", 00:25:03.544 "superblock": true, 00:25:03.544 "num_base_bdevs": 2, 00:25:03.544 "num_base_bdevs_discovered": 1, 00:25:03.544 "num_base_bdevs_operational": 1, 00:25:03.544 "base_bdevs_list": [ 00:25:03.544 { 00:25:03.544 "name": null, 00:25:03.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.544 "is_configured": false, 00:25:03.544 "data_offset": 0, 00:25:03.544 "data_size": 7936 00:25:03.544 }, 00:25:03.544 { 00:25:03.544 "name": "BaseBdev2", 00:25:03.544 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:25:03.544 "is_configured": true, 00:25:03.544 "data_offset": 256, 00:25:03.544 "data_size": 7936 00:25:03.544 } 00:25:03.544 ] 00:25:03.544 }' 
00:25:03.544 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.544 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:03.803 "name": "raid_bdev1", 00:25:03.803 "uuid": "9feb694e-4a5c-4188-91a8-400f6fea9e5b", 00:25:03.803 "strip_size_kb": 0, 00:25:03.803 "state": "online", 00:25:03.803 "raid_level": "raid1", 00:25:03.803 "superblock": true, 00:25:03.803 "num_base_bdevs": 2, 00:25:03.803 "num_base_bdevs_discovered": 1, 00:25:03.803 "num_base_bdevs_operational": 1, 00:25:03.803 "base_bdevs_list": [ 00:25:03.803 { 00:25:03.803 "name": null, 00:25:03.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.803 "is_configured": false, 00:25:03.803 "data_offset": 0, 
00:25:03.803 "data_size": 7936 00:25:03.803 }, 00:25:03.803 { 00:25:03.803 "name": "BaseBdev2", 00:25:03.803 "uuid": "77895156-049c-5c94-a13f-f66e97850264", 00:25:03.803 "is_configured": true, 00:25:03.803 "data_offset": 256, 00:25:03.803 "data_size": 7936 00:25:03.803 } 00:25:03.803 ] 00:25:03.803 }' 00:25:03.803 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87268 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87268 ']' 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87268 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87268 00:25:04.062 killing process with pid 87268 00:25:04.062 Received shutdown signal, test time was about 60.000000 seconds 00:25:04.062 00:25:04.062 Latency(us) 00:25:04.062 [2024-12-06T13:19:10.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.062 [2024-12-06T13:19:10.591Z] =================================================================================================================== 00:25:04.062 [2024-12-06T13:19:10.591Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:04.062 13:19:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87268' 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87268 00:25:04.062 [2024-12-06 13:19:10.438516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:04.062 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87268 00:25:04.062 [2024-12-06 13:19:10.438691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.062 [2024-12-06 13:19:10.438763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.062 [2024-12-06 13:19:10.438784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:04.320 [2024-12-06 13:19:10.705384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:05.254 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:25:05.254 00:25:05.254 real 0m21.702s 00:25:05.254 user 0m29.334s 00:25:05.254 sys 0m2.583s 00:25:05.254 ************************************ 00:25:05.254 END TEST raid_rebuild_test_sb_4k 00:25:05.254 ************************************ 00:25:05.255 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.255 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:25:05.513 13:19:11 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:25:05.513 13:19:11 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:25:05.513 13:19:11 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:05.513 13:19:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.513 13:19:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:05.513 ************************************ 00:25:05.513 START TEST raid_state_function_test_sb_md_separate 00:25:05.513 ************************************ 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87971 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87971' 00:25:05.513 Process raid pid: 87971 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87971 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87971 ']' 00:25:05.513 13:19:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.513 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.514 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.514 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.514 13:19:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:05.514 [2024-12-06 13:19:11.942351] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:05.514 [2024-12-06 13:19:11.942721] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.772 [2024-12-06 13:19:12.127131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.772 [2024-12-06 13:19:12.261705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.031 [2024-12-06 13:19:12.474517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:06.031 [2024-12-06 13:19:12.474802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:06.596 [2024-12-06 13:19:12.903654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:06.596 [2024-12-06 13:19:12.903724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:06.596 [2024-12-06 13:19:12.903742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:06.596 [2024-12-06 13:19:12.903763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:06.596 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.597 "name": "Existed_Raid", 00:25:06.597 "uuid": "6cc46a66-8eec-455b-b57a-7a169a5a2254", 00:25:06.597 "strip_size_kb": 0, 00:25:06.597 "state": "configuring", 00:25:06.597 "raid_level": "raid1", 00:25:06.597 "superblock": true, 00:25:06.597 "num_base_bdevs": 2, 00:25:06.597 "num_base_bdevs_discovered": 0, 00:25:06.597 "num_base_bdevs_operational": 2, 00:25:06.597 "base_bdevs_list": [ 00:25:06.597 { 00:25:06.597 "name": "BaseBdev1", 00:25:06.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.597 "is_configured": false, 00:25:06.597 "data_offset": 0, 00:25:06.597 "data_size": 0 00:25:06.597 }, 00:25:06.597 { 00:25:06.597 "name": "BaseBdev2", 00:25:06.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.597 "is_configured": false, 00:25:06.597 "data_offset": 0, 00:25:06.597 "data_size": 0 00:25:06.597 } 00:25:06.597 ] 00:25:06.597 }' 00:25:06.597 13:19:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.597 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.162 [2024-12-06 13:19:13.423785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:07.162 [2024-12-06 13:19:13.423830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.162 [2024-12-06 13:19:13.431762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:07.162 [2024-12-06 13:19:13.431819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:07.162 [2024-12-06 13:19:13.431835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.162 [2024-12-06 13:19:13.431854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.162 13:19:13 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.162 [2024-12-06 13:19:13.478927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:07.162 BaseBdev1 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.162 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.162 [ 00:25:07.162 { 00:25:07.162 "name": "BaseBdev1", 00:25:07.162 "aliases": [ 00:25:07.162 "4bcff774-e8eb-4521-be94-dc1f14b6f566" 00:25:07.162 ], 00:25:07.162 "product_name": "Malloc disk", 00:25:07.162 "block_size": 4096, 00:25:07.162 "num_blocks": 8192, 00:25:07.162 "uuid": "4bcff774-e8eb-4521-be94-dc1f14b6f566", 00:25:07.162 "md_size": 32, 00:25:07.162 "md_interleave": false, 00:25:07.162 "dif_type": 0, 00:25:07.162 "assigned_rate_limits": { 00:25:07.162 "rw_ios_per_sec": 0, 00:25:07.162 "rw_mbytes_per_sec": 0, 00:25:07.162 "r_mbytes_per_sec": 0, 00:25:07.162 "w_mbytes_per_sec": 0 00:25:07.162 }, 00:25:07.162 "claimed": true, 00:25:07.162 "claim_type": "exclusive_write", 00:25:07.162 "zoned": false, 00:25:07.162 "supported_io_types": { 00:25:07.162 "read": true, 00:25:07.162 "write": true, 00:25:07.162 "unmap": true, 00:25:07.162 "flush": true, 00:25:07.162 "reset": true, 00:25:07.162 "nvme_admin": false, 00:25:07.162 "nvme_io": false, 00:25:07.162 "nvme_io_md": false, 00:25:07.162 "write_zeroes": true, 00:25:07.162 "zcopy": true, 00:25:07.162 "get_zone_info": false, 00:25:07.162 "zone_management": false, 00:25:07.162 "zone_append": false, 00:25:07.162 "compare": false, 00:25:07.162 "compare_and_write": false, 00:25:07.162 "abort": true, 00:25:07.162 "seek_hole": false, 00:25:07.162 "seek_data": false, 00:25:07.162 "copy": true, 00:25:07.163 "nvme_iov_md": false 00:25:07.163 }, 00:25:07.163 "memory_domains": [ 00:25:07.163 { 00:25:07.163 "dma_device_id": "system", 00:25:07.163 "dma_device_type": 1 00:25:07.163 }, 
00:25:07.163 { 00:25:07.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.163 "dma_device_type": 2 00:25:07.163 } 00:25:07.163 ], 00:25:07.163 "driver_specific": {} 00:25:07.163 } 00:25:07.163 ] 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.163 "name": "Existed_Raid", 00:25:07.163 "uuid": "47ef04e5-4ef6-40b1-b0c6-6c63d4bd97b8", 00:25:07.163 "strip_size_kb": 0, 00:25:07.163 "state": "configuring", 00:25:07.163 "raid_level": "raid1", 00:25:07.163 "superblock": true, 00:25:07.163 "num_base_bdevs": 2, 00:25:07.163 "num_base_bdevs_discovered": 1, 00:25:07.163 "num_base_bdevs_operational": 2, 00:25:07.163 "base_bdevs_list": [ 00:25:07.163 { 00:25:07.163 "name": "BaseBdev1", 00:25:07.163 "uuid": "4bcff774-e8eb-4521-be94-dc1f14b6f566", 00:25:07.163 "is_configured": true, 00:25:07.163 "data_offset": 256, 00:25:07.163 "data_size": 7936 00:25:07.163 }, 00:25:07.163 { 00:25:07.163 "name": "BaseBdev2", 00:25:07.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.163 "is_configured": false, 00:25:07.163 "data_offset": 0, 00:25:07.163 "data_size": 0 00:25:07.163 } 00:25:07.163 ] 00:25:07.163 }' 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.163 13:19:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:25:07.730 [2024-12-06 13:19:14.027173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:07.730 [2024-12-06 13:19:14.027394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.730 [2024-12-06 13:19:14.035200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:07.730 [2024-12-06 13:19:14.037762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.730 [2024-12-06 13:19:14.037815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.730 "name": "Existed_Raid", 00:25:07.730 "uuid": "1f76e722-d0f5-4f4c-b42e-5a4615c84f46", 00:25:07.730 "strip_size_kb": 0, 00:25:07.730 "state": "configuring", 00:25:07.730 "raid_level": "raid1", 00:25:07.730 "superblock": true, 00:25:07.730 "num_base_bdevs": 2, 00:25:07.730 "num_base_bdevs_discovered": 1, 00:25:07.730 
"num_base_bdevs_operational": 2, 00:25:07.730 "base_bdevs_list": [ 00:25:07.730 { 00:25:07.730 "name": "BaseBdev1", 00:25:07.730 "uuid": "4bcff774-e8eb-4521-be94-dc1f14b6f566", 00:25:07.730 "is_configured": true, 00:25:07.730 "data_offset": 256, 00:25:07.730 "data_size": 7936 00:25:07.730 }, 00:25:07.730 { 00:25:07.730 "name": "BaseBdev2", 00:25:07.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.730 "is_configured": false, 00:25:07.730 "data_offset": 0, 00:25:07.730 "data_size": 0 00:25:07.730 } 00:25:07.730 ] 00:25:07.730 }' 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.730 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.062 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:25:08.062 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.062 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.320 [2024-12-06 13:19:14.619468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:08.320 [2024-12-06 13:19:14.619979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:08.320 [2024-12-06 13:19:14.620009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:08.320 [2024-12-06 13:19:14.620116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:08.320 BaseBdev2 00:25:08.320 [2024-12-06 13:19:14.620292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:08.320 [2024-12-06 13:19:14.620312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:25:08.320 [2024-12-06 13:19:14.620431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.320 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.320 [ 00:25:08.320 { 00:25:08.320 "name": "BaseBdev2", 00:25:08.320 "aliases": [ 00:25:08.320 
"3c02cebf-98e8-4b84-bd97-fd82d78ad3ba" 00:25:08.320 ], 00:25:08.320 "product_name": "Malloc disk", 00:25:08.320 "block_size": 4096, 00:25:08.320 "num_blocks": 8192, 00:25:08.320 "uuid": "3c02cebf-98e8-4b84-bd97-fd82d78ad3ba", 00:25:08.320 "md_size": 32, 00:25:08.320 "md_interleave": false, 00:25:08.320 "dif_type": 0, 00:25:08.320 "assigned_rate_limits": { 00:25:08.320 "rw_ios_per_sec": 0, 00:25:08.320 "rw_mbytes_per_sec": 0, 00:25:08.320 "r_mbytes_per_sec": 0, 00:25:08.320 "w_mbytes_per_sec": 0 00:25:08.320 }, 00:25:08.320 "claimed": true, 00:25:08.320 "claim_type": "exclusive_write", 00:25:08.320 "zoned": false, 00:25:08.320 "supported_io_types": { 00:25:08.320 "read": true, 00:25:08.320 "write": true, 00:25:08.320 "unmap": true, 00:25:08.320 "flush": true, 00:25:08.320 "reset": true, 00:25:08.320 "nvme_admin": false, 00:25:08.320 "nvme_io": false, 00:25:08.320 "nvme_io_md": false, 00:25:08.320 "write_zeroes": true, 00:25:08.320 "zcopy": true, 00:25:08.320 "get_zone_info": false, 00:25:08.320 "zone_management": false, 00:25:08.320 "zone_append": false, 00:25:08.320 "compare": false, 00:25:08.320 "compare_and_write": false, 00:25:08.320 "abort": true, 00:25:08.320 "seek_hole": false, 00:25:08.320 "seek_data": false, 00:25:08.320 "copy": true, 00:25:08.320 "nvme_iov_md": false 00:25:08.320 }, 00:25:08.320 "memory_domains": [ 00:25:08.320 { 00:25:08.320 "dma_device_id": "system", 00:25:08.320 "dma_device_type": 1 00:25:08.320 }, 00:25:08.320 { 00:25:08.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.320 "dma_device_type": 2 00:25:08.320 } 00:25:08.320 ], 00:25:08.320 "driver_specific": {} 00:25:08.320 } 00:25:08.320 ] 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.321 13:19:14 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.321 "name": "Existed_Raid", 00:25:08.321 "uuid": "1f76e722-d0f5-4f4c-b42e-5a4615c84f46", 00:25:08.321 "strip_size_kb": 0, 00:25:08.321 "state": "online", 00:25:08.321 "raid_level": "raid1", 00:25:08.321 "superblock": true, 00:25:08.321 "num_base_bdevs": 2, 00:25:08.321 "num_base_bdevs_discovered": 2, 00:25:08.321 "num_base_bdevs_operational": 2, 00:25:08.321 "base_bdevs_list": [ 00:25:08.321 { 00:25:08.321 "name": "BaseBdev1", 00:25:08.321 "uuid": "4bcff774-e8eb-4521-be94-dc1f14b6f566", 00:25:08.321 "is_configured": true, 00:25:08.321 "data_offset": 256, 00:25:08.321 "data_size": 7936 00:25:08.321 }, 00:25:08.321 { 00:25:08.321 "name": "BaseBdev2", 00:25:08.321 "uuid": "3c02cebf-98e8-4b84-bd97-fd82d78ad3ba", 00:25:08.321 "is_configured": true, 00:25:08.321 "data_offset": 256, 00:25:08.321 "data_size": 7936 00:25:08.321 } 00:25:08.321 ] 00:25:08.321 }' 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.321 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:25:08.886 13:19:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:08.886 [2024-12-06 13:19:15.172075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.886 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:08.886 "name": "Existed_Raid", 00:25:08.886 "aliases": [ 00:25:08.886 "1f76e722-d0f5-4f4c-b42e-5a4615c84f46" 00:25:08.886 ], 00:25:08.886 "product_name": "Raid Volume", 00:25:08.886 "block_size": 4096, 00:25:08.886 "num_blocks": 7936, 00:25:08.886 "uuid": "1f76e722-d0f5-4f4c-b42e-5a4615c84f46", 00:25:08.886 "md_size": 32, 00:25:08.886 "md_interleave": false, 00:25:08.886 "dif_type": 0, 00:25:08.886 "assigned_rate_limits": { 00:25:08.886 "rw_ios_per_sec": 0, 00:25:08.886 "rw_mbytes_per_sec": 0, 00:25:08.886 "r_mbytes_per_sec": 0, 00:25:08.886 "w_mbytes_per_sec": 0 00:25:08.886 }, 00:25:08.886 "claimed": false, 00:25:08.886 "zoned": false, 00:25:08.886 "supported_io_types": { 00:25:08.886 "read": true, 00:25:08.886 "write": true, 00:25:08.886 "unmap": false, 00:25:08.886 "flush": false, 00:25:08.886 "reset": true, 00:25:08.886 "nvme_admin": false, 00:25:08.886 "nvme_io": false, 00:25:08.886 "nvme_io_md": false, 00:25:08.886 "write_zeroes": true, 00:25:08.886 "zcopy": false, 00:25:08.886 "get_zone_info": 
false, 00:25:08.886 "zone_management": false, 00:25:08.886 "zone_append": false, 00:25:08.886 "compare": false, 00:25:08.886 "compare_and_write": false, 00:25:08.886 "abort": false, 00:25:08.886 "seek_hole": false, 00:25:08.886 "seek_data": false, 00:25:08.886 "copy": false, 00:25:08.886 "nvme_iov_md": false 00:25:08.886 }, 00:25:08.886 "memory_domains": [ 00:25:08.886 { 00:25:08.886 "dma_device_id": "system", 00:25:08.886 "dma_device_type": 1 00:25:08.886 }, 00:25:08.886 { 00:25:08.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.886 "dma_device_type": 2 00:25:08.886 }, 00:25:08.886 { 00:25:08.886 "dma_device_id": "system", 00:25:08.886 "dma_device_type": 1 00:25:08.886 }, 00:25:08.886 { 00:25:08.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.886 "dma_device_type": 2 00:25:08.886 } 00:25:08.886 ], 00:25:08.886 "driver_specific": { 00:25:08.886 "raid": { 00:25:08.887 "uuid": "1f76e722-d0f5-4f4c-b42e-5a4615c84f46", 00:25:08.887 "strip_size_kb": 0, 00:25:08.887 "state": "online", 00:25:08.887 "raid_level": "raid1", 00:25:08.887 "superblock": true, 00:25:08.887 "num_base_bdevs": 2, 00:25:08.887 "num_base_bdevs_discovered": 2, 00:25:08.887 "num_base_bdevs_operational": 2, 00:25:08.887 "base_bdevs_list": [ 00:25:08.887 { 00:25:08.887 "name": "BaseBdev1", 00:25:08.887 "uuid": "4bcff774-e8eb-4521-be94-dc1f14b6f566", 00:25:08.887 "is_configured": true, 00:25:08.887 "data_offset": 256, 00:25:08.887 "data_size": 7936 00:25:08.887 }, 00:25:08.887 { 00:25:08.887 "name": "BaseBdev2", 00:25:08.887 "uuid": "3c02cebf-98e8-4b84-bd97-fd82d78ad3ba", 00:25:08.887 "is_configured": true, 00:25:08.887 "data_offset": 256, 00:25:08.887 "data_size": 7936 00:25:08.887 } 00:25:08.887 ] 00:25:08.887 } 00:25:08.887 } 00:25:08.887 }' 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:08.887 13:19:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:08.887 BaseBdev2' 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.887 13:19:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:08.887 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.145 [2024-12-06 13:19:15.431811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.145 "name": "Existed_Raid", 
00:25:09.145 "uuid": "1f76e722-d0f5-4f4c-b42e-5a4615c84f46", 00:25:09.145 "strip_size_kb": 0, 00:25:09.145 "state": "online", 00:25:09.145 "raid_level": "raid1", 00:25:09.145 "superblock": true, 00:25:09.145 "num_base_bdevs": 2, 00:25:09.145 "num_base_bdevs_discovered": 1, 00:25:09.145 "num_base_bdevs_operational": 1, 00:25:09.145 "base_bdevs_list": [ 00:25:09.145 { 00:25:09.145 "name": null, 00:25:09.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.145 "is_configured": false, 00:25:09.145 "data_offset": 0, 00:25:09.145 "data_size": 7936 00:25:09.145 }, 00:25:09.145 { 00:25:09.145 "name": "BaseBdev2", 00:25:09.145 "uuid": "3c02cebf-98e8-4b84-bd97-fd82d78ad3ba", 00:25:09.145 "is_configured": true, 00:25:09.145 "data_offset": 256, 00:25:09.145 "data_size": 7936 00:25:09.145 } 00:25:09.145 ] 00:25:09.145 }' 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.145 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.712 [2024-12-06 13:19:16.102044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:09.712 [2024-12-06 13:19:16.102187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:09.712 [2024-12-06 13:19:16.195978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:09.712 [2024-12-06 13:19:16.196194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:09.712 [2024-12-06 13:19:16.196353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:09.712 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87971 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87971 ']' 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87971 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87971 00:25:09.971 killing process with pid 87971 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87971' 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87971 00:25:09.971 [2024-12-06 13:19:16.283873] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:09.971 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87971 00:25:09.971 [2024-12-06 13:19:16.299033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:10.907 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:25:10.907 00:25:10.907 real 0m5.540s 00:25:10.907 user 0m8.248s 00:25:10.907 sys 0m0.847s 00:25:10.907 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.907 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:10.907 ************************************ 00:25:10.907 END TEST raid_state_function_test_sb_md_separate 00:25:10.907 ************************************ 00:25:10.907 13:19:17 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:25:10.907 13:19:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:10.907 13:19:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.907 13:19:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:10.907 ************************************ 00:25:10.907 START TEST raid_superblock_test_md_separate 00:25:10.907 ************************************ 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88229 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88229 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88229 ']' 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.907 13:19:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.907 13:19:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:11.165 [2024-12-06 13:19:17.517799] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:11.165 [2024-12-06 13:19:17.518135] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88229 ] 00:25:11.424 [2024-12-06 13:19:17.692622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.424 [2024-12-06 13:19:17.818234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.683 [2024-12-06 13:19:18.020140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:11.684 [2024-12-06 13:19:18.020181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:12.252 13:19:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.252 malloc1 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.252 [2024-12-06 13:19:18.570826] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:12.252 [2024-12-06 13:19:18.571029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.252 [2024-12-06 13:19:18.571119] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:25:12.252 [2024-12-06 13:19:18.571377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.252 [2024-12-06 13:19:18.574037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.252 [2024-12-06 13:19:18.574206] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:12.252 pt1 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.252 malloc2 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.252 [2024-12-06 13:19:18.630422] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:12.252 [2024-12-06 13:19:18.630505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.252 [2024-12-06 13:19:18.630540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:12.252 [2024-12-06 13:19:18.630556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.252 [2024-12-06 13:19:18.633238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.252 [2024-12-06 13:19:18.633283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:12.252 pt2 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.252 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.253 [2024-12-06 13:19:18.638432] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:12.253 [2024-12-06 13:19:18.641168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:12.253 [2024-12-06 13:19:18.641596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:12.253 [2024-12-06 13:19:18.641729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:12.253 [2024-12-06 13:19:18.641873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:12.253 [2024-12-06 13:19:18.642279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:12.253 [2024-12-06 13:19:18.642428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:12.253 [2024-12-06 13:19:18.642765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.253 13:19:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.253 "name": "raid_bdev1", 00:25:12.253 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:12.253 "strip_size_kb": 0, 00:25:12.253 "state": "online", 00:25:12.253 "raid_level": "raid1", 00:25:12.253 "superblock": true, 00:25:12.253 "num_base_bdevs": 2, 00:25:12.253 "num_base_bdevs_discovered": 2, 00:25:12.253 "num_base_bdevs_operational": 2, 00:25:12.253 "base_bdevs_list": [ 00:25:12.253 { 00:25:12.253 "name": "pt1", 00:25:12.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:12.253 "is_configured": true, 00:25:12.253 "data_offset": 256, 00:25:12.253 "data_size": 7936 00:25:12.253 }, 00:25:12.253 { 00:25:12.253 "name": "pt2", 00:25:12.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:12.253 "is_configured": true, 00:25:12.253 "data_offset": 256, 00:25:12.253 "data_size": 7936 00:25:12.253 } 00:25:12.253 ] 00:25:12.253 }' 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:25:12.253 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.821 [2024-12-06 13:19:19.143295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.821 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:12.821 "name": "raid_bdev1", 00:25:12.821 "aliases": [ 00:25:12.821 "a4d4da4d-f005-4ce3-8778-1f49828c4eb1" 00:25:12.821 ], 00:25:12.821 "product_name": "Raid Volume", 00:25:12.821 "block_size": 4096, 00:25:12.821 "num_blocks": 7936, 00:25:12.821 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:12.821 "md_size": 32, 
00:25:12.821 "md_interleave": false, 00:25:12.821 "dif_type": 0, 00:25:12.821 "assigned_rate_limits": { 00:25:12.821 "rw_ios_per_sec": 0, 00:25:12.821 "rw_mbytes_per_sec": 0, 00:25:12.821 "r_mbytes_per_sec": 0, 00:25:12.821 "w_mbytes_per_sec": 0 00:25:12.821 }, 00:25:12.821 "claimed": false, 00:25:12.821 "zoned": false, 00:25:12.821 "supported_io_types": { 00:25:12.821 "read": true, 00:25:12.821 "write": true, 00:25:12.821 "unmap": false, 00:25:12.821 "flush": false, 00:25:12.821 "reset": true, 00:25:12.821 "nvme_admin": false, 00:25:12.821 "nvme_io": false, 00:25:12.821 "nvme_io_md": false, 00:25:12.821 "write_zeroes": true, 00:25:12.821 "zcopy": false, 00:25:12.821 "get_zone_info": false, 00:25:12.821 "zone_management": false, 00:25:12.821 "zone_append": false, 00:25:12.821 "compare": false, 00:25:12.821 "compare_and_write": false, 00:25:12.821 "abort": false, 00:25:12.821 "seek_hole": false, 00:25:12.822 "seek_data": false, 00:25:12.822 "copy": false, 00:25:12.822 "nvme_iov_md": false 00:25:12.822 }, 00:25:12.822 "memory_domains": [ 00:25:12.822 { 00:25:12.822 "dma_device_id": "system", 00:25:12.822 "dma_device_type": 1 00:25:12.822 }, 00:25:12.822 { 00:25:12.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.822 "dma_device_type": 2 00:25:12.822 }, 00:25:12.822 { 00:25:12.822 "dma_device_id": "system", 00:25:12.822 "dma_device_type": 1 00:25:12.822 }, 00:25:12.822 { 00:25:12.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.822 "dma_device_type": 2 00:25:12.822 } 00:25:12.822 ], 00:25:12.822 "driver_specific": { 00:25:12.822 "raid": { 00:25:12.822 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:12.822 "strip_size_kb": 0, 00:25:12.822 "state": "online", 00:25:12.822 "raid_level": "raid1", 00:25:12.822 "superblock": true, 00:25:12.822 "num_base_bdevs": 2, 00:25:12.822 "num_base_bdevs_discovered": 2, 00:25:12.822 "num_base_bdevs_operational": 2, 00:25:12.822 "base_bdevs_list": [ 00:25:12.822 { 00:25:12.822 "name": "pt1", 00:25:12.822 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:12.822 "is_configured": true, 00:25:12.822 "data_offset": 256, 00:25:12.822 "data_size": 7936 00:25:12.822 }, 00:25:12.822 { 00:25:12.822 "name": "pt2", 00:25:12.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:12.822 "is_configured": true, 00:25:12.822 "data_offset": 256, 00:25:12.822 "data_size": 7936 00:25:12.822 } 00:25:12.822 ] 00:25:12.822 } 00:25:12.822 } 00:25:12.822 }' 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:12.822 pt2' 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:12.822 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:13.081 [2024-12-06 13:19:19.407258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a4d4da4d-f005-4ce3-8778-1f49828c4eb1 00:25:13.081 
13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z a4d4da4d-f005-4ce3-8778-1f49828c4eb1 ']' 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.081 [2024-12-06 13:19:19.458950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:13.081 [2024-12-06 13:19:19.458982] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:13.081 [2024-12-06 13:19:19.459104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:13.081 [2024-12-06 13:19:19.459188] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:13.081 [2024-12-06 13:19:19.459210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:13.081 13:19:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.081 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.081 [2024-12-06 13:19:19.603021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:13.081 [2024-12-06 13:19:19.605497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:13.081 [2024-12-06 13:19:19.605605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:13.081 [2024-12-06 13:19:19.605714] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:25:13.081 [2024-12-06 13:19:19.605743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:13.081 [2024-12-06 13:19:19.605760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:13.340 request: 00:25:13.340 { 00:25:13.340 "name": "raid_bdev1", 00:25:13.340 "raid_level": "raid1", 00:25:13.340 "base_bdevs": [ 00:25:13.340 "malloc1", 00:25:13.340 "malloc2" 00:25:13.340 ], 00:25:13.340 "superblock": false, 00:25:13.340 "method": "bdev_raid_create", 00:25:13.340 "req_id": 1 00:25:13.340 } 00:25:13.340 Got JSON-RPC error response 00:25:13.340 response: 00:25:13.340 { 00:25:13.340 "code": -17, 00:25:13.340 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:13.340 } 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.340 13:19:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.340 [2024-12-06 13:19:19.666979] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:13.340 [2024-12-06 13:19:19.667041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.340 [2024-12-06 13:19:19.667068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:13.340 [2024-12-06 13:19:19.667086] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.340 [2024-12-06 13:19:19.669714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.340 [2024-12-06 13:19:19.669764] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:13.340 [2024-12-06 13:19:19.669824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:13.340 [2024-12-06 13:19:19.669897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:13.340 pt1 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:13.340 
13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.340 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.340 "name": "raid_bdev1", 00:25:13.340 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:13.340 "strip_size_kb": 0, 00:25:13.340 "state": "configuring", 00:25:13.340 "raid_level": "raid1", 00:25:13.340 "superblock": true, 00:25:13.340 "num_base_bdevs": 2, 00:25:13.340 "num_base_bdevs_discovered": 1, 00:25:13.340 
"num_base_bdevs_operational": 2, 00:25:13.340 "base_bdevs_list": [ 00:25:13.340 { 00:25:13.340 "name": "pt1", 00:25:13.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:13.340 "is_configured": true, 00:25:13.340 "data_offset": 256, 00:25:13.340 "data_size": 7936 00:25:13.340 }, 00:25:13.340 { 00:25:13.340 "name": null, 00:25:13.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:13.341 "is_configured": false, 00:25:13.341 "data_offset": 256, 00:25:13.341 "data_size": 7936 00:25:13.341 } 00:25:13.341 ] 00:25:13.341 }' 00:25:13.341 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.341 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.907 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:13.907 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:13.907 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:13.907 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:13.907 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.907 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.908 [2024-12-06 13:19:20.195121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:13.908 [2024-12-06 13:19:20.195358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.908 [2024-12-06 13:19:20.195398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:13.908 [2024-12-06 13:19:20.195418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.908 
[2024-12-06 13:19:20.195702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.908 [2024-12-06 13:19:20.195742] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:13.908 [2024-12-06 13:19:20.195821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:13.908 [2024-12-06 13:19:20.195857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:13.908 [2024-12-06 13:19:20.195996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:13.908 [2024-12-06 13:19:20.196017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:13.908 [2024-12-06 13:19:20.196109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:13.908 [2024-12-06 13:19:20.196255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:13.908 [2024-12-06 13:19:20.196270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:13.908 [2024-12-06 13:19:20.196390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.908 pt2 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.908 "name": "raid_bdev1", 00:25:13.908 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:13.908 "strip_size_kb": 0, 00:25:13.908 "state": "online", 00:25:13.908 "raid_level": "raid1", 00:25:13.908 "superblock": true, 00:25:13.908 "num_base_bdevs": 2, 00:25:13.908 "num_base_bdevs_discovered": 2, 00:25:13.908 "num_base_bdevs_operational": 2, 00:25:13.908 "base_bdevs_list": [ 00:25:13.908 { 00:25:13.908 "name": 
"pt1", 00:25:13.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:13.908 "is_configured": true, 00:25:13.908 "data_offset": 256, 00:25:13.908 "data_size": 7936 00:25:13.908 }, 00:25:13.908 { 00:25:13.908 "name": "pt2", 00:25:13.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:13.908 "is_configured": true, 00:25:13.908 "data_offset": 256, 00:25:13.908 "data_size": 7936 00:25:13.908 } 00:25:13.908 ] 00:25:13.908 }' 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.908 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:14.474 [2024-12-06 13:19:20.707769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.474 13:19:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:14.474 "name": "raid_bdev1", 00:25:14.474 "aliases": [ 00:25:14.474 "a4d4da4d-f005-4ce3-8778-1f49828c4eb1" 00:25:14.474 ], 00:25:14.474 "product_name": "Raid Volume", 00:25:14.474 "block_size": 4096, 00:25:14.474 "num_blocks": 7936, 00:25:14.474 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:14.474 "md_size": 32, 00:25:14.474 "md_interleave": false, 00:25:14.474 "dif_type": 0, 00:25:14.474 "assigned_rate_limits": { 00:25:14.474 "rw_ios_per_sec": 0, 00:25:14.474 "rw_mbytes_per_sec": 0, 00:25:14.474 "r_mbytes_per_sec": 0, 00:25:14.474 "w_mbytes_per_sec": 0 00:25:14.474 }, 00:25:14.474 "claimed": false, 00:25:14.474 "zoned": false, 00:25:14.474 "supported_io_types": { 00:25:14.474 "read": true, 00:25:14.474 "write": true, 00:25:14.474 "unmap": false, 00:25:14.474 "flush": false, 00:25:14.474 "reset": true, 00:25:14.474 "nvme_admin": false, 00:25:14.474 "nvme_io": false, 00:25:14.474 "nvme_io_md": false, 00:25:14.474 "write_zeroes": true, 00:25:14.474 "zcopy": false, 00:25:14.474 "get_zone_info": false, 00:25:14.474 "zone_management": false, 00:25:14.474 "zone_append": false, 00:25:14.474 "compare": false, 00:25:14.474 "compare_and_write": false, 00:25:14.474 "abort": false, 00:25:14.474 "seek_hole": false, 00:25:14.474 "seek_data": false, 00:25:14.474 "copy": false, 00:25:14.474 "nvme_iov_md": false 00:25:14.474 }, 00:25:14.474 "memory_domains": [ 00:25:14.474 { 00:25:14.474 "dma_device_id": "system", 00:25:14.474 "dma_device_type": 1 00:25:14.474 }, 00:25:14.474 { 00:25:14.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.474 "dma_device_type": 2 00:25:14.474 }, 00:25:14.474 { 00:25:14.474 "dma_device_id": "system", 00:25:14.474 "dma_device_type": 1 00:25:14.474 }, 00:25:14.474 { 00:25:14.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.474 
"dma_device_type": 2 00:25:14.474 } 00:25:14.474 ], 00:25:14.474 "driver_specific": { 00:25:14.474 "raid": { 00:25:14.474 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:14.474 "strip_size_kb": 0, 00:25:14.474 "state": "online", 00:25:14.474 "raid_level": "raid1", 00:25:14.474 "superblock": true, 00:25:14.474 "num_base_bdevs": 2, 00:25:14.474 "num_base_bdevs_discovered": 2, 00:25:14.474 "num_base_bdevs_operational": 2, 00:25:14.474 "base_bdevs_list": [ 00:25:14.474 { 00:25:14.474 "name": "pt1", 00:25:14.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:14.474 "is_configured": true, 00:25:14.474 "data_offset": 256, 00:25:14.474 "data_size": 7936 00:25:14.474 }, 00:25:14.474 { 00:25:14.474 "name": "pt2", 00:25:14.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:14.474 "is_configured": true, 00:25:14.474 "data_offset": 256, 00:25:14.474 "data_size": 7936 00:25:14.474 } 00:25:14.474 ] 00:25:14.474 } 00:25:14.474 } 00:25:14.474 }' 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:14.474 pt2' 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:14.474 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.475 [2024-12-06 13:19:20.975687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.475 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' a4d4da4d-f005-4ce3-8778-1f49828c4eb1 '!=' a4d4da4d-f005-4ce3-8778-1f49828c4eb1 ']' 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.733 [2024-12-06 13:19:21.019382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.733 "name": "raid_bdev1", 00:25:14.733 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:14.733 "strip_size_kb": 0, 00:25:14.733 "state": "online", 00:25:14.733 "raid_level": "raid1", 00:25:14.733 "superblock": true, 00:25:14.733 "num_base_bdevs": 2, 00:25:14.733 "num_base_bdevs_discovered": 1, 00:25:14.733 "num_base_bdevs_operational": 1, 00:25:14.733 "base_bdevs_list": [ 00:25:14.733 { 00:25:14.733 "name": null, 00:25:14.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.733 "is_configured": false, 00:25:14.733 "data_offset": 0, 
00:25:14.733 "data_size": 7936 00:25:14.733 }, 00:25:14.733 { 00:25:14.733 "name": "pt2", 00:25:14.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:14.733 "is_configured": true, 00:25:14.733 "data_offset": 256, 00:25:14.733 "data_size": 7936 00:25:14.733 } 00:25:14.733 ] 00:25:14.733 }' 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.733 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.991 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:14.991 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.991 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:14.991 [2024-12-06 13:19:21.507536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:14.991 [2024-12-06 13:19:21.507571] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:14.991 [2024-12-06 13:19:21.507671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.991 [2024-12-06 13:19:21.507751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:14.991 [2024-12-06 13:19:21.507772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:14.991 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.991 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.991 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:15.249 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.249 [2024-12-06 13:19:21.583557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:15.249 [2024-12-06 13:19:21.583781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.249 [2024-12-06 13:19:21.584011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:15.250 [2024-12-06 13:19:21.584151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.250 [2024-12-06 13:19:21.586941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.250 [2024-12-06 13:19:21.587108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:15.250 [2024-12-06 13:19:21.587285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:15.250 [2024-12-06 13:19:21.587491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:15.250 [2024-12-06 13:19:21.587632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:15.250 [2024-12-06 13:19:21.587656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:15.250 [2024-12-06 13:19:21.587750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:15.250 [2024-12-06 13:19:21.587914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:15.250 [2024-12-06 13:19:21.587930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:15.250 [2024-12-06 13:19:21.588188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.250 pt2 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.250 "name": "raid_bdev1", 00:25:15.250 
"uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:15.250 "strip_size_kb": 0, 00:25:15.250 "state": "online", 00:25:15.250 "raid_level": "raid1", 00:25:15.250 "superblock": true, 00:25:15.250 "num_base_bdevs": 2, 00:25:15.250 "num_base_bdevs_discovered": 1, 00:25:15.250 "num_base_bdevs_operational": 1, 00:25:15.250 "base_bdevs_list": [ 00:25:15.250 { 00:25:15.250 "name": null, 00:25:15.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.250 "is_configured": false, 00:25:15.250 "data_offset": 256, 00:25:15.250 "data_size": 7936 00:25:15.250 }, 00:25:15.250 { 00:25:15.250 "name": "pt2", 00:25:15.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:15.250 "is_configured": true, 00:25:15.250 "data_offset": 256, 00:25:15.250 "data_size": 7936 00:25:15.250 } 00:25:15.250 ] 00:25:15.250 }' 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.250 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.815 [2024-12-06 13:19:22.115882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.815 [2024-12-06 13:19:22.115921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.815 [2024-12-06 13:19:22.116016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.815 [2024-12-06 13:19:22.116095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.815 [2024-12-06 13:19:22.116112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.815 [2024-12-06 13:19:22.175914] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:15.815 [2024-12-06 13:19:22.175986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.815 [2024-12-06 13:19:22.176021] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:15.815 [2024-12-06 13:19:22.176037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.815 [2024-12-06 
13:19:22.178775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.815 [2024-12-06 13:19:22.178984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:15.815 [2024-12-06 13:19:22.179088] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:15.815 [2024-12-06 13:19:22.179149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:15.815 [2024-12-06 13:19:22.179318] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:15.815 [2024-12-06 13:19:22.179336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.815 [2024-12-06 13:19:22.179359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:15.815 [2024-12-06 13:19:22.179463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:15.815 [2024-12-06 13:19:22.179615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:15.815 [2024-12-06 13:19:22.179653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:15.815 [2024-12-06 13:19:22.179780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:15.815 [2024-12-06 13:19:22.179936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:15.815 [2024-12-06 13:19:22.179956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:15.815 [2024-12-06 13:19:22.180143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.815 pt1 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.815 13:19:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:15.815 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.816 "name": "raid_bdev1", 00:25:15.816 "uuid": "a4d4da4d-f005-4ce3-8778-1f49828c4eb1", 00:25:15.816 "strip_size_kb": 0, 00:25:15.816 "state": "online", 00:25:15.816 "raid_level": "raid1", 00:25:15.816 "superblock": true, 00:25:15.816 "num_base_bdevs": 2, 00:25:15.816 "num_base_bdevs_discovered": 1, 00:25:15.816 "num_base_bdevs_operational": 1, 00:25:15.816 "base_bdevs_list": [ 00:25:15.816 { 00:25:15.816 "name": null, 00:25:15.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.816 "is_configured": false, 00:25:15.816 "data_offset": 256, 00:25:15.816 "data_size": 7936 00:25:15.816 }, 00:25:15.816 { 00:25:15.816 "name": "pt2", 00:25:15.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:15.816 "is_configured": true, 00:25:15.816 "data_offset": 256, 00:25:15.816 "data_size": 7936 00:25:15.816 } 00:25:15.816 ] 00:25:15.816 }' 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.816 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:16.381 13:19:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:16.381 [2024-12-06 13:19:22.768606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' a4d4da4d-f005-4ce3-8778-1f49828c4eb1 '!=' a4d4da4d-f005-4ce3-8778-1f49828c4eb1 ']' 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88229 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88229 ']' 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88229 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88229 00:25:16.381 killing process with pid 88229 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88229' 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 88229 00:25:16.381 [2024-12-06 13:19:22.849994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:16.381 [2024-12-06 13:19:22.850094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.381 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88229 00:25:16.381 [2024-12-06 13:19:22.850162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:16.381 [2024-12-06 13:19:22.850190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:16.638 [2024-12-06 13:19:23.046819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:18.011 ************************************ 00:25:18.011 END TEST raid_superblock_test_md_separate 00:25:18.011 ************************************ 00:25:18.011 13:19:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:25:18.011 00:25:18.011 real 0m6.673s 00:25:18.011 user 0m10.555s 00:25:18.011 sys 0m0.961s 00:25:18.011 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:18.011 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.011 13:19:24 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:25:18.011 13:19:24 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:25:18.011 13:19:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:18.011 13:19:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:18.011 13:19:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:18.011 ************************************ 00:25:18.011 START TEST raid_rebuild_test_sb_md_separate 00:25:18.011 
************************************ 00:25:18.011 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:25:18.011 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:18.011 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:18.011 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:18.011 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:18.011 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:18.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88556 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88556 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88556 ']' 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:18.012 13:19:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.012 13:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.012 [2024-12-06 13:19:24.271089] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:18.012 [2024-12-06 13:19:24.271501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88556 ] 00:25:18.012 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:18.012 Zero copy mechanism will not be used. 
00:25:18.012 [2024-12-06 13:19:24.453935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.270 [2024-12-06 13:19:24.581943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.270 [2024-12-06 13:19:24.790015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:18.270 [2024-12-06 13:19:24.790282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.836 BaseBdev1_malloc 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:18.836 [2024-12-06 13:19:25.331022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:18.836 [2024-12-06 13:19:25.331236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.836 [2024-12-06 13:19:25.331284] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:18.836 [2024-12-06 13:19:25.331304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.836 [2024-12-06 13:19:25.333866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.836 [2024-12-06 13:19:25.333933] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:18.836 BaseBdev1 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.836 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.093 BaseBdev2_malloc 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.093 [2024-12-06 13:19:25.394256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:19.093 [2024-12-06 13:19:25.394345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.093 [2024-12-06 13:19:25.394376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:25:19.093 [2024-12-06 13:19:25.394393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.093 [2024-12-06 13:19:25.396971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.093 [2024-12-06 13:19:25.397143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:19.093 BaseBdev2 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.093 spare_malloc 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.093 spare_delay 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.093 [2024-12-06 
13:19:25.469092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:19.093 [2024-12-06 13:19:25.469201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.093 [2024-12-06 13:19:25.469233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:19.093 [2024-12-06 13:19:25.469251] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.093 [2024-12-06 13:19:25.471833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.093 [2024-12-06 13:19:25.472008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:19.093 spare 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.093 [2024-12-06 13:19:25.481147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:19.093 [2024-12-06 13:19:25.483613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:19.093 [2024-12-06 13:19:25.483860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:19.093 [2024-12-06 13:19:25.483885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:19.093 [2024-12-06 13:19:25.483990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:19.093 [2024-12-06 13:19:25.484166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:25:19.093 [2024-12-06 13:19:25.484183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:19.093 [2024-12-06 13:19:25.484310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.093 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.093 "name": "raid_bdev1", 00:25:19.093 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:19.093 "strip_size_kb": 0, 00:25:19.093 "state": "online", 00:25:19.093 "raid_level": "raid1", 00:25:19.093 "superblock": true, 00:25:19.093 "num_base_bdevs": 2, 00:25:19.093 "num_base_bdevs_discovered": 2, 00:25:19.093 "num_base_bdevs_operational": 2, 00:25:19.094 "base_bdevs_list": [ 00:25:19.094 { 00:25:19.094 "name": "BaseBdev1", 00:25:19.094 "uuid": "2d499d88-84d4-5f39-a9bc-34ce8b927875", 00:25:19.094 "is_configured": true, 00:25:19.094 "data_offset": 256, 00:25:19.094 "data_size": 7936 00:25:19.094 }, 00:25:19.094 { 00:25:19.094 "name": "BaseBdev2", 00:25:19.094 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:19.094 "is_configured": true, 00:25:19.094 "data_offset": 256, 00:25:19.094 "data_size": 7936 00:25:19.094 } 00:25:19.094 ] 00:25:19.094 }' 00:25:19.094 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.094 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.660 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:19.660 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:19.660 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.660 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.660 [2024-12-06 13:19:25.977721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:25:19.660 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:19.660 13:19:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:19.660 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:19.918 [2024-12-06 13:19:26.313516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:19.918 /dev/nbd0 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:19.918 1+0 records in 00:25:19.918 1+0 records out 00:25:19.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438011 s, 9.4 MB/s 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:19.918 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:25:19.919 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:19.919 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:19.919 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:25:19.919 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:19.919 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:19.919 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:25:19.919 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:25:19.919 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:25:20.936 7936+0 records in 00:25:20.936 7936+0 records out 00:25:20.936 32505856 bytes (33 MB, 31 MiB) copied, 0.942585 s, 34.5 MB/s 00:25:20.936 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:20.936 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:20.936 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:20.936 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:20.936 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:20.936 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:20.936 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:21.194 [2024-12-06 13:19:27.584495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.194 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:21.194 [2024-12-06 13:19:27.596616] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.195 "name": "raid_bdev1", 00:25:21.195 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:21.195 "strip_size_kb": 0, 00:25:21.195 "state": "online", 00:25:21.195 "raid_level": "raid1", 00:25:21.195 "superblock": true, 00:25:21.195 "num_base_bdevs": 2, 00:25:21.195 "num_base_bdevs_discovered": 1, 00:25:21.195 "num_base_bdevs_operational": 1, 00:25:21.195 "base_bdevs_list": [ 00:25:21.195 { 00:25:21.195 "name": null, 00:25:21.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.195 "is_configured": false, 00:25:21.195 "data_offset": 0, 00:25:21.195 "data_size": 7936 00:25:21.195 }, 00:25:21.195 { 00:25:21.195 "name": "BaseBdev2", 00:25:21.195 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:21.195 "is_configured": true, 00:25:21.195 "data_offset": 256, 00:25:21.195 "data_size": 7936 00:25:21.195 } 00:25:21.195 ] 00:25:21.195 }' 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.195 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:21.757 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:21.757 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.757 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:21.757 [2024-12-06 13:19:28.116795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:21.757 [2024-12-06 13:19:28.130719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:25:21.757 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.757 13:19:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:21.757 [2024-12-06 13:19:28.133283] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:22.687 "name": "raid_bdev1", 00:25:22.687 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:22.687 "strip_size_kb": 0, 00:25:22.687 "state": "online", 00:25:22.687 "raid_level": "raid1", 00:25:22.687 "superblock": true, 00:25:22.687 "num_base_bdevs": 2, 00:25:22.687 "num_base_bdevs_discovered": 2, 00:25:22.687 "num_base_bdevs_operational": 2, 00:25:22.687 "process": { 00:25:22.687 "type": "rebuild", 00:25:22.687 
"target": "spare", 00:25:22.687 "progress": { 00:25:22.687 "blocks": 2560, 00:25:22.687 "percent": 32 00:25:22.687 } 00:25:22.687 }, 00:25:22.687 "base_bdevs_list": [ 00:25:22.687 { 00:25:22.687 "name": "spare", 00:25:22.687 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:22.687 "is_configured": true, 00:25:22.687 "data_offset": 256, 00:25:22.687 "data_size": 7936 00:25:22.687 }, 00:25:22.687 { 00:25:22.687 "name": "BaseBdev2", 00:25:22.687 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:22.687 "is_configured": true, 00:25:22.687 "data_offset": 256, 00:25:22.687 "data_size": 7936 00:25:22.687 } 00:25:22.687 ] 00:25:22.687 }' 00:25:22.687 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.944 [2024-12-06 13:19:29.299555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:22.944 [2024-12-06 13:19:29.342464] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:22.944 [2024-12-06 13:19:29.342813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.944 [2024-12-06 13:19:29.342855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:22.944 
[2024-12-06 13:19:29.342878] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:22.944 13:19:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.944 "name": "raid_bdev1", 00:25:22.944 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:22.944 "strip_size_kb": 0, 00:25:22.944 "state": "online", 00:25:22.944 "raid_level": "raid1", 00:25:22.944 "superblock": true, 00:25:22.944 "num_base_bdevs": 2, 00:25:22.944 "num_base_bdevs_discovered": 1, 00:25:22.944 "num_base_bdevs_operational": 1, 00:25:22.944 "base_bdevs_list": [ 00:25:22.944 { 00:25:22.944 "name": null, 00:25:22.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.944 "is_configured": false, 00:25:22.944 "data_offset": 0, 00:25:22.944 "data_size": 7936 00:25:22.944 }, 00:25:22.944 { 00:25:22.944 "name": "BaseBdev2", 00:25:22.944 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:22.944 "is_configured": true, 00:25:22.944 "data_offset": 256, 00:25:22.944 "data_size": 7936 00:25:22.944 } 00:25:22.944 ] 00:25:22.944 }' 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.944 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:23.510 "name": "raid_bdev1", 00:25:23.510 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:23.510 "strip_size_kb": 0, 00:25:23.510 "state": "online", 00:25:23.510 "raid_level": "raid1", 00:25:23.510 "superblock": true, 00:25:23.510 "num_base_bdevs": 2, 00:25:23.510 "num_base_bdevs_discovered": 1, 00:25:23.510 "num_base_bdevs_operational": 1, 00:25:23.510 "base_bdevs_list": [ 00:25:23.510 { 00:25:23.510 "name": null, 00:25:23.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.510 "is_configured": false, 00:25:23.510 "data_offset": 0, 00:25:23.510 "data_size": 7936 00:25:23.510 }, 00:25:23.510 { 00:25:23.510 "name": "BaseBdev2", 00:25:23.510 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:23.510 "is_configured": true, 00:25:23.510 "data_offset": 256, 00:25:23.510 "data_size": 7936 00:25:23.510 } 00:25:23.510 ] 00:25:23.510 }' 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:23.510 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:23.769 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:23.769 
13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:23.769 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.769 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:23.769 [2024-12-06 13:19:30.049585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:23.769 [2024-12-06 13:19:30.062796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:25:23.769 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.769 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:23.769 [2024-12-06 13:19:30.065505] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.706 13:19:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:24.706 "name": "raid_bdev1", 00:25:24.706 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:24.706 "strip_size_kb": 0, 00:25:24.706 "state": "online", 00:25:24.706 "raid_level": "raid1", 00:25:24.706 "superblock": true, 00:25:24.706 "num_base_bdevs": 2, 00:25:24.706 "num_base_bdevs_discovered": 2, 00:25:24.706 "num_base_bdevs_operational": 2, 00:25:24.706 "process": { 00:25:24.706 "type": "rebuild", 00:25:24.706 "target": "spare", 00:25:24.706 "progress": { 00:25:24.706 "blocks": 2560, 00:25:24.706 "percent": 32 00:25:24.706 } 00:25:24.706 }, 00:25:24.706 "base_bdevs_list": [ 00:25:24.706 { 00:25:24.706 "name": "spare", 00:25:24.706 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:24.706 "is_configured": true, 00:25:24.706 "data_offset": 256, 00:25:24.706 "data_size": 7936 00:25:24.706 }, 00:25:24.706 { 00:25:24.706 "name": "BaseBdev2", 00:25:24.706 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:24.706 "is_configured": true, 00:25:24.706 "data_offset": 256, 00:25:24.706 "data_size": 7936 00:25:24.706 } 00:25:24.706 ] 00:25:24.706 }' 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:24.706 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=783 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.706 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:24.966 13:19:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.966 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:24.966 "name": "raid_bdev1", 00:25:24.966 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:24.966 "strip_size_kb": 0, 00:25:24.966 "state": "online", 00:25:24.966 "raid_level": "raid1", 00:25:24.966 "superblock": true, 00:25:24.966 "num_base_bdevs": 2, 00:25:24.966 "num_base_bdevs_discovered": 2, 00:25:24.966 "num_base_bdevs_operational": 2, 00:25:24.966 "process": { 00:25:24.966 "type": "rebuild", 00:25:24.966 "target": "spare", 00:25:24.966 "progress": { 00:25:24.966 "blocks": 2816, 00:25:24.966 "percent": 35 00:25:24.966 } 00:25:24.966 }, 00:25:24.966 "base_bdevs_list": [ 00:25:24.966 { 00:25:24.966 "name": "spare", 00:25:24.966 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:24.966 "is_configured": true, 00:25:24.966 "data_offset": 256, 00:25:24.966 "data_size": 7936 00:25:24.966 }, 00:25:24.966 { 00:25:24.966 "name": "BaseBdev2", 00:25:24.966 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:24.966 "is_configured": true, 00:25:24.966 "data_offset": 256, 00:25:24.966 "data_size": 7936 00:25:24.966 } 00:25:24.966 ] 00:25:24.966 }' 00:25:24.966 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:24.966 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:24.966 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:24.966 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:24.966 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:25.921 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.196 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:26.196 "name": "raid_bdev1", 00:25:26.196 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:26.196 "strip_size_kb": 0, 00:25:26.196 "state": "online", 00:25:26.196 "raid_level": "raid1", 00:25:26.196 "superblock": true, 00:25:26.196 "num_base_bdevs": 2, 00:25:26.196 "num_base_bdevs_discovered": 2, 00:25:26.196 "num_base_bdevs_operational": 2, 00:25:26.196 "process": { 00:25:26.196 "type": "rebuild", 00:25:26.196 "target": "spare", 00:25:26.196 "progress": { 00:25:26.196 "blocks": 5888, 00:25:26.196 "percent": 74 00:25:26.196 } 00:25:26.196 }, 00:25:26.196 "base_bdevs_list": [ 00:25:26.196 { 00:25:26.196 "name": "spare", 
00:25:26.196 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:26.196 "is_configured": true, 00:25:26.196 "data_offset": 256, 00:25:26.196 "data_size": 7936 00:25:26.196 }, 00:25:26.196 { 00:25:26.196 "name": "BaseBdev2", 00:25:26.196 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:26.196 "is_configured": true, 00:25:26.196 "data_offset": 256, 00:25:26.196 "data_size": 7936 00:25:26.196 } 00:25:26.196 ] 00:25:26.196 }' 00:25:26.196 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:26.196 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:26.196 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:26.196 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:26.196 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:26.775 [2024-12-06 13:19:33.189875] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:26.775 [2024-12-06 13:19:33.189989] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:26.775 [2024-12-06 13:19:33.190165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.033 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:27.033 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.033 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:27.034 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:27.034 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:27.034 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:27.293 "name": "raid_bdev1", 00:25:27.293 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:27.293 "strip_size_kb": 0, 00:25:27.293 "state": "online", 00:25:27.293 "raid_level": "raid1", 00:25:27.293 "superblock": true, 00:25:27.293 "num_base_bdevs": 2, 00:25:27.293 "num_base_bdevs_discovered": 2, 00:25:27.293 "num_base_bdevs_operational": 2, 00:25:27.293 "base_bdevs_list": [ 00:25:27.293 { 00:25:27.293 "name": "spare", 00:25:27.293 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:27.293 "is_configured": true, 00:25:27.293 "data_offset": 256, 00:25:27.293 "data_size": 7936 00:25:27.293 }, 00:25:27.293 { 00:25:27.293 "name": "BaseBdev2", 00:25:27.293 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:27.293 "is_configured": true, 00:25:27.293 "data_offset": 256, 00:25:27.293 "data_size": 7936 00:25:27.293 } 00:25:27.293 ] 00:25:27.293 }' 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:27.293 "name": "raid_bdev1", 00:25:27.293 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:27.293 "strip_size_kb": 0, 00:25:27.293 "state": "online", 00:25:27.293 "raid_level": "raid1", 00:25:27.293 "superblock": true, 00:25:27.293 "num_base_bdevs": 2, 00:25:27.293 
"num_base_bdevs_discovered": 2, 00:25:27.293 "num_base_bdevs_operational": 2, 00:25:27.293 "base_bdevs_list": [ 00:25:27.293 { 00:25:27.293 "name": "spare", 00:25:27.293 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:27.293 "is_configured": true, 00:25:27.293 "data_offset": 256, 00:25:27.293 "data_size": 7936 00:25:27.293 }, 00:25:27.293 { 00:25:27.293 "name": "BaseBdev2", 00:25:27.293 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:27.293 "is_configured": true, 00:25:27.293 "data_offset": 256, 00:25:27.293 "data_size": 7936 00:25:27.293 } 00:25:27.293 ] 00:25:27.293 }' 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:27.293 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.552 13:19:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.552 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.552 "name": "raid_bdev1", 00:25:27.552 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:27.552 "strip_size_kb": 0, 00:25:27.552 "state": "online", 00:25:27.552 "raid_level": "raid1", 00:25:27.552 "superblock": true, 00:25:27.552 "num_base_bdevs": 2, 00:25:27.552 "num_base_bdevs_discovered": 2, 00:25:27.552 "num_base_bdevs_operational": 2, 00:25:27.552 "base_bdevs_list": [ 00:25:27.552 { 00:25:27.552 "name": "spare", 00:25:27.552 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:27.552 "is_configured": true, 00:25:27.552 "data_offset": 256, 00:25:27.552 "data_size": 7936 00:25:27.552 }, 00:25:27.552 { 00:25:27.552 "name": "BaseBdev2", 00:25:27.552 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:27.552 "is_configured": true, 00:25:27.552 "data_offset": 256, 00:25:27.552 "data_size": 7936 00:25:27.552 } 00:25:27.552 ] 00:25:27.552 }' 00:25:27.553 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.553 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:28.119 [2024-12-06 13:19:34.365003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.119 [2024-12-06 13:19:34.365042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.119 [2024-12-06 13:19:34.365173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.119 [2024-12-06 13:19:34.365261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.119 [2024-12-06 13:19:34.365277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:28.119 
13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:28.119 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:28.378 /dev/nbd0 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 
00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:28.378 1+0 records in 00:25:28.378 1+0 records out 00:25:28.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034091 s, 12.0 MB/s 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:28.378 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:28.657 /dev/nbd1 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:28.657 1+0 records in 00:25:28.657 1+0 records out 00:25:28.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550499 s, 7.4 MB/s 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:28.657 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:28.917 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:28.917 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:28.917 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:28.917 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:28.917 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:25:28.917 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:28.917 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:29.175 13:19:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:29.175 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:29.434 13:19:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:29.434 [2024-12-06 13:19:35.893834] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:29.434 [2024-12-06 13:19:35.893900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.434 [2024-12-06 13:19:35.893944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:29.434 [2024-12-06 13:19:35.893959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.434 [2024-12-06 13:19:35.896698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.434 [2024-12-06 13:19:35.896744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:29.434 [2024-12-06 13:19:35.896838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:29.434 [2024-12-06 13:19:35.896902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:29.434 [2024-12-06 13:19:35.897078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:29.434 spare 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.434 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:29.691 [2024-12-06 13:19:35.997201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:29.691 [2024-12-06 13:19:35.997386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:25:29.691 [2024-12-06 13:19:35.997568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:25:29.691 [2024-12-06 13:19:35.997933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:29.691 [2024-12-06 13:19:35.998071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:29.691 [2024-12-06 13:19:35.998467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:29.691 13:19:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.691 "name": "raid_bdev1", 00:25:29.691 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:29.691 "strip_size_kb": 0, 00:25:29.691 "state": "online", 00:25:29.691 "raid_level": "raid1", 00:25:29.691 "superblock": true, 00:25:29.691 "num_base_bdevs": 2, 00:25:29.691 "num_base_bdevs_discovered": 2, 00:25:29.691 "num_base_bdevs_operational": 2, 00:25:29.691 "base_bdevs_list": [ 00:25:29.691 { 00:25:29.691 "name": "spare", 00:25:29.691 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:29.691 "is_configured": true, 00:25:29.691 "data_offset": 256, 00:25:29.691 "data_size": 7936 00:25:29.691 }, 00:25:29.691 { 00:25:29.691 "name": "BaseBdev2", 00:25:29.691 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:29.691 "is_configured": true, 00:25:29.691 "data_offset": 256, 00:25:29.691 "data_size": 7936 00:25:29.691 } 
00:25:29.691 ] 00:25:29.691 }' 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.691 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:30.257 "name": "raid_bdev1", 00:25:30.257 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:30.257 "strip_size_kb": 0, 00:25:30.257 "state": "online", 00:25:30.257 "raid_level": "raid1", 00:25:30.257 "superblock": true, 00:25:30.257 "num_base_bdevs": 2, 00:25:30.257 "num_base_bdevs_discovered": 2, 00:25:30.257 "num_base_bdevs_operational": 2, 00:25:30.257 "base_bdevs_list": [ 00:25:30.257 { 
00:25:30.257 "name": "spare", 00:25:30.257 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:30.257 "is_configured": true, 00:25:30.257 "data_offset": 256, 00:25:30.257 "data_size": 7936 00:25:30.257 }, 00:25:30.257 { 00:25:30.257 "name": "BaseBdev2", 00:25:30.257 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:30.257 "is_configured": true, 00:25:30.257 "data_offset": 256, 00:25:30.257 "data_size": 7936 00:25:30.257 } 00:25:30.257 ] 00:25:30.257 }' 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.257 [2024-12-06 13:19:36.686678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.257 13:19:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:30.257 "name": "raid_bdev1", 00:25:30.257 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:30.257 "strip_size_kb": 0, 00:25:30.257 "state": "online", 00:25:30.257 "raid_level": "raid1", 00:25:30.257 "superblock": true, 00:25:30.257 "num_base_bdevs": 2, 00:25:30.257 "num_base_bdevs_discovered": 1, 00:25:30.257 "num_base_bdevs_operational": 1, 00:25:30.257 "base_bdevs_list": [ 00:25:30.257 { 00:25:30.257 "name": null, 00:25:30.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.257 "is_configured": false, 00:25:30.257 "data_offset": 0, 00:25:30.257 "data_size": 7936 00:25:30.257 }, 00:25:30.257 { 00:25:30.257 "name": "BaseBdev2", 00:25:30.257 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:30.257 "is_configured": true, 00:25:30.257 "data_offset": 256, 00:25:30.257 "data_size": 7936 00:25:30.257 } 00:25:30.257 ] 00:25:30.257 }' 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:30.257 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.823 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:30.823 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.823 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:30.823 [2024-12-06 13:19:37.202877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:30.823 [2024-12-06 13:19:37.203285] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:30.823 [2024-12-06 
13:19:37.203321] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:30.823 [2024-12-06 13:19:37.203402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:30.823 [2024-12-06 13:19:37.216817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:25:30.823 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.823 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:30.823 [2024-12-06 13:19:37.219591] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.757 13:19:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:31.757 "name": "raid_bdev1", 00:25:31.757 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:31.757 "strip_size_kb": 0, 00:25:31.757 "state": "online", 00:25:31.757 "raid_level": "raid1", 00:25:31.757 "superblock": true, 00:25:31.757 "num_base_bdevs": 2, 00:25:31.757 "num_base_bdevs_discovered": 2, 00:25:31.757 "num_base_bdevs_operational": 2, 00:25:31.757 "process": { 00:25:31.757 "type": "rebuild", 00:25:31.757 "target": "spare", 00:25:31.757 "progress": { 00:25:31.757 "blocks": 2560, 00:25:31.757 "percent": 32 00:25:31.757 } 00:25:31.757 }, 00:25:31.757 "base_bdevs_list": [ 00:25:31.757 { 00:25:31.757 "name": "spare", 00:25:31.757 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:31.757 "is_configured": true, 00:25:31.757 "data_offset": 256, 00:25:31.757 "data_size": 7936 00:25:31.757 }, 00:25:31.757 { 00:25:31.757 "name": "BaseBdev2", 00:25:31.757 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:31.757 "is_configured": true, 00:25:31.757 "data_offset": 256, 00:25:31.757 "data_size": 7936 00:25:31.757 } 00:25:31.757 ] 00:25:31.757 }' 00:25:31.757 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.015 [2024-12-06 13:19:38.385130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:32.015 [2024-12-06 13:19:38.429045] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:32.015 [2024-12-06 13:19:38.429126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.015 [2024-12-06 13:19:38.429150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:32.015 [2024-12-06 13:19:38.429176] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.015 "name": "raid_bdev1", 00:25:32.015 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:32.015 "strip_size_kb": 0, 00:25:32.015 "state": "online", 00:25:32.015 "raid_level": "raid1", 00:25:32.015 "superblock": true, 00:25:32.015 "num_base_bdevs": 2, 00:25:32.015 "num_base_bdevs_discovered": 1, 00:25:32.015 "num_base_bdevs_operational": 1, 00:25:32.015 "base_bdevs_list": [ 00:25:32.015 { 00:25:32.015 "name": null, 00:25:32.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.015 "is_configured": false, 00:25:32.015 "data_offset": 0, 00:25:32.015 "data_size": 7936 00:25:32.015 }, 00:25:32.015 { 00:25:32.015 "name": "BaseBdev2", 00:25:32.015 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:32.015 "is_configured": true, 00:25:32.015 "data_offset": 256, 00:25:32.015 "data_size": 7936 00:25:32.015 } 00:25:32.015 ] 00:25:32.015 }' 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.015 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:32.583 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:32.583 13:19:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.583 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:32.583 [2024-12-06 13:19:38.967973] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:32.583 [2024-12-06 13:19:38.968058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.583 [2024-12-06 13:19:38.968103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:32.583 [2024-12-06 13:19:38.968120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.583 [2024-12-06 13:19:38.968487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.583 [2024-12-06 13:19:38.968521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:32.583 [2024-12-06 13:19:38.968605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:32.583 [2024-12-06 13:19:38.968634] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:32.583 [2024-12-06 13:19:38.968648] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:32.583 [2024-12-06 13:19:38.968689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:32.583 [2024-12-06 13:19:38.981833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:25:32.583 spare 00:25:32.583 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.583 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:32.583 [2024-12-06 13:19:38.984403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.519 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:33.519 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.519 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:33.519 "name": 
"raid_bdev1", 00:25:33.519 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:33.519 "strip_size_kb": 0, 00:25:33.519 "state": "online", 00:25:33.519 "raid_level": "raid1", 00:25:33.519 "superblock": true, 00:25:33.519 "num_base_bdevs": 2, 00:25:33.519 "num_base_bdevs_discovered": 2, 00:25:33.519 "num_base_bdevs_operational": 2, 00:25:33.519 "process": { 00:25:33.519 "type": "rebuild", 00:25:33.519 "target": "spare", 00:25:33.519 "progress": { 00:25:33.519 "blocks": 2560, 00:25:33.519 "percent": 32 00:25:33.519 } 00:25:33.519 }, 00:25:33.519 "base_bdevs_list": [ 00:25:33.519 { 00:25:33.519 "name": "spare", 00:25:33.519 "uuid": "4cfc8361-f944-5d0e-832a-913313ebc5b7", 00:25:33.519 "is_configured": true, 00:25:33.519 "data_offset": 256, 00:25:33.519 "data_size": 7936 00:25:33.519 }, 00:25:33.519 { 00:25:33.519 "name": "BaseBdev2", 00:25:33.519 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:33.519 "is_configured": true, 00:25:33.519 "data_offset": 256, 00:25:33.519 "data_size": 7936 00:25:33.519 } 00:25:33.519 ] 00:25:33.519 }' 00:25:33.519 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 [2024-12-06 13:19:40.154280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:33.777 [2024-12-06 13:19:40.193420] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:33.777 [2024-12-06 13:19:40.193547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.777 [2024-12-06 13:19:40.193578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:33.777 [2024-12-06 13:19:40.193590] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.777 "name": "raid_bdev1", 00:25:33.777 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:33.777 "strip_size_kb": 0, 00:25:33.777 "state": "online", 00:25:33.777 "raid_level": "raid1", 00:25:33.777 "superblock": true, 00:25:33.777 "num_base_bdevs": 2, 00:25:33.777 "num_base_bdevs_discovered": 1, 00:25:33.777 "num_base_bdevs_operational": 1, 00:25:33.777 "base_bdevs_list": [ 00:25:33.777 { 00:25:33.777 "name": null, 00:25:33.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.777 "is_configured": false, 00:25:33.777 "data_offset": 0, 00:25:33.777 "data_size": 7936 00:25:33.777 }, 00:25:33.777 { 00:25:33.777 "name": "BaseBdev2", 00:25:33.777 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:33.777 "is_configured": true, 00:25:33.777 "data_offset": 256, 00:25:33.777 "data_size": 7936 00:25:33.777 } 00:25:33.777 ] 00:25:33.777 }' 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.777 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:34.342 13:19:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:34.342 "name": "raid_bdev1", 00:25:34.342 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:34.342 "strip_size_kb": 0, 00:25:34.342 "state": "online", 00:25:34.342 "raid_level": "raid1", 00:25:34.342 "superblock": true, 00:25:34.342 "num_base_bdevs": 2, 00:25:34.342 "num_base_bdevs_discovered": 1, 00:25:34.342 "num_base_bdevs_operational": 1, 00:25:34.342 "base_bdevs_list": [ 00:25:34.342 { 00:25:34.342 "name": null, 00:25:34.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.342 "is_configured": false, 00:25:34.342 "data_offset": 0, 00:25:34.342 "data_size": 7936 00:25:34.342 }, 00:25:34.342 { 00:25:34.342 "name": "BaseBdev2", 00:25:34.342 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:34.342 "is_configured": true, 00:25:34.342 "data_offset": 256, 00:25:34.342 "data_size": 7936 00:25:34.342 } 00:25:34.342 ] 00:25:34.342 }' 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:34.342 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:34.600 [2024-12-06 13:19:40.900597] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:34.600 [2024-12-06 13:19:40.900678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:34.600 [2024-12-06 13:19:40.900712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:34.600 [2024-12-06 13:19:40.900727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:34.600 [2024-12-06 13:19:40.901014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:34.600 [2024-12-06 13:19:40.901038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:25:34.600 [2024-12-06 13:19:40.901107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:34.600 [2024-12-06 13:19:40.901156] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:34.600 [2024-12-06 13:19:40.901187] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:34.600 [2024-12-06 13:19:40.901199] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:34.600 BaseBdev1 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.600 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:35.536 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.537 "name": "raid_bdev1", 00:25:35.537 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:35.537 "strip_size_kb": 0, 00:25:35.537 "state": "online", 00:25:35.537 "raid_level": "raid1", 00:25:35.537 "superblock": true, 00:25:35.537 "num_base_bdevs": 2, 00:25:35.537 "num_base_bdevs_discovered": 1, 00:25:35.537 "num_base_bdevs_operational": 1, 00:25:35.537 "base_bdevs_list": [ 00:25:35.537 { 00:25:35.537 "name": null, 00:25:35.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.537 "is_configured": false, 00:25:35.537 "data_offset": 0, 00:25:35.537 "data_size": 7936 00:25:35.537 }, 00:25:35.537 { 00:25:35.537 "name": "BaseBdev2", 00:25:35.537 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:35.537 "is_configured": true, 00:25:35.537 "data_offset": 256, 00:25:35.537 "data_size": 7936 00:25:35.537 } 00:25:35.537 ] 00:25:35.537 }' 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.537 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.103 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:25:36.103 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:36.103 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:36.103 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:36.103 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:36.103 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:36.104 "name": "raid_bdev1", 00:25:36.104 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:36.104 "strip_size_kb": 0, 00:25:36.104 "state": "online", 00:25:36.104 "raid_level": "raid1", 00:25:36.104 "superblock": true, 00:25:36.104 "num_base_bdevs": 2, 00:25:36.104 "num_base_bdevs_discovered": 1, 00:25:36.104 "num_base_bdevs_operational": 1, 00:25:36.104 "base_bdevs_list": [ 00:25:36.104 { 00:25:36.104 "name": null, 00:25:36.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.104 "is_configured": false, 00:25:36.104 "data_offset": 0, 00:25:36.104 "data_size": 7936 00:25:36.104 }, 00:25:36.104 { 00:25:36.104 "name": "BaseBdev2", 00:25:36.104 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:36.104 "is_configured": 
true, 00:25:36.104 "data_offset": 256, 00:25:36.104 "data_size": 7936 00:25:36.104 } 00:25:36.104 ] 00:25:36.104 }' 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:36.104 [2024-12-06 13:19:42.597272] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:36.104 [2024-12-06 13:19:42.597453] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:36.104 [2024-12-06 13:19:42.597510] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:36.104 request: 00:25:36.104 { 00:25:36.104 "base_bdev": "BaseBdev1", 00:25:36.104 "raid_bdev": "raid_bdev1", 00:25:36.104 "method": "bdev_raid_add_base_bdev", 00:25:36.104 "req_id": 1 00:25:36.104 } 00:25:36.104 Got JSON-RPC error response 00:25:36.104 response: 00:25:36.104 { 00:25:36.104 "code": -22, 00:25:36.104 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:36.104 } 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:36.104 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.481 "name": "raid_bdev1", 00:25:37.481 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:37.481 "strip_size_kb": 0, 00:25:37.481 "state": "online", 00:25:37.481 "raid_level": "raid1", 00:25:37.481 "superblock": true, 00:25:37.481 "num_base_bdevs": 2, 00:25:37.481 "num_base_bdevs_discovered": 1, 00:25:37.481 "num_base_bdevs_operational": 1, 00:25:37.481 "base_bdevs_list": [ 00:25:37.481 { 00:25:37.481 "name": null, 00:25:37.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.481 "is_configured": false, 00:25:37.481 
"data_offset": 0, 00:25:37.481 "data_size": 7936 00:25:37.481 }, 00:25:37.481 { 00:25:37.481 "name": "BaseBdev2", 00:25:37.481 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:37.481 "is_configured": true, 00:25:37.481 "data_offset": 256, 00:25:37.481 "data_size": 7936 00:25:37.481 } 00:25:37.481 ] 00:25:37.481 }' 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.481 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:37.740 "name": "raid_bdev1", 00:25:37.740 "uuid": "8b9be452-e414-444f-894e-095c5be8f3ef", 00:25:37.740 
"strip_size_kb": 0, 00:25:37.740 "state": "online", 00:25:37.740 "raid_level": "raid1", 00:25:37.740 "superblock": true, 00:25:37.740 "num_base_bdevs": 2, 00:25:37.740 "num_base_bdevs_discovered": 1, 00:25:37.740 "num_base_bdevs_operational": 1, 00:25:37.740 "base_bdevs_list": [ 00:25:37.740 { 00:25:37.740 "name": null, 00:25:37.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.740 "is_configured": false, 00:25:37.740 "data_offset": 0, 00:25:37.740 "data_size": 7936 00:25:37.740 }, 00:25:37.740 { 00:25:37.740 "name": "BaseBdev2", 00:25:37.740 "uuid": "f8963939-f7c3-5592-a21f-f77740b6acd5", 00:25:37.740 "is_configured": true, 00:25:37.740 "data_offset": 256, 00:25:37.740 "data_size": 7936 00:25:37.740 } 00:25:37.740 ] 00:25:37.740 }' 00:25:37.740 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88556 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88556 ']' 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88556 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88556 00:25:37.999 killing process with 
pid 88556 00:25:37.999 Received shutdown signal, test time was about 60.000000 seconds 00:25:37.999 00:25:37.999 Latency(us) 00:25:37.999 [2024-12-06T13:19:44.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.999 [2024-12-06T13:19:44.528Z] =================================================================================================================== 00:25:37.999 [2024-12-06T13:19:44.528Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88556' 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88556 00:25:37.999 [2024-12-06 13:19:44.355758] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:37.999 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88556 00:25:37.999 [2024-12-06 13:19:44.355927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.999 [2024-12-06 13:19:44.355993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.999 [2024-12-06 13:19:44.356027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:38.259 [2024-12-06 13:19:44.646832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:39.192 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:25:39.192 00:25:39.192 real 0m21.545s 00:25:39.192 user 0m29.226s 00:25:39.192 sys 0m2.444s 00:25:39.192 13:19:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.192 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:39.192 ************************************ 00:25:39.192 END TEST raid_rebuild_test_sb_md_separate 00:25:39.192 ************************************ 00:25:39.450 13:19:45 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:25:39.450 13:19:45 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:25:39.450 13:19:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:39.450 13:19:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:39.450 13:19:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.450 ************************************ 00:25:39.450 START TEST raid_state_function_test_sb_md_interleaved 00:25:39.450 ************************************ 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:39.450 13:19:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:39.450 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89254 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:39.451 Process raid pid: 89254 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89254' 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89254 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89254 ']' 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.451 13:19:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:39.451 [2024-12-06 13:19:45.858645] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:25:39.451 [2024-12-06 13:19:45.858788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.736 [2024-12-06 13:19:46.033818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.736 [2024-12-06 13:19:46.171032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.021 [2024-12-06 13:19:46.388136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.021 [2024-12-06 13:19:46.388203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.595 [2024-12-06 13:19:46.874716] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.595 [2024-12-06 13:19:46.874782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.595 [2024-12-06 13:19:46.874799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:40.595 [2024-12-06 13:19:46.874816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:40.595 13:19:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:40.595 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.596 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.596 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.596 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.596 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.596 13:19:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.596 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:40.596 "name": "Existed_Raid", 00:25:40.596 "uuid": "52aa619b-b4c0-467e-878d-790bdb9d8299", 00:25:40.596 "strip_size_kb": 0, 00:25:40.596 "state": "configuring", 00:25:40.596 "raid_level": "raid1", 00:25:40.596 "superblock": true, 00:25:40.596 "num_base_bdevs": 2, 00:25:40.596 "num_base_bdevs_discovered": 0, 00:25:40.596 "num_base_bdevs_operational": 2, 00:25:40.596 "base_bdevs_list": [ 00:25:40.596 { 00:25:40.596 "name": "BaseBdev1", 00:25:40.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.596 "is_configured": false, 00:25:40.596 "data_offset": 0, 00:25:40.596 "data_size": 0 00:25:40.596 }, 00:25:40.596 { 00:25:40.596 "name": "BaseBdev2", 00:25:40.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.596 "is_configured": false, 00:25:40.596 "data_offset": 0, 00:25:40.596 "data_size": 0 00:25:40.596 } 00:25:40.596 ] 00:25:40.596 }' 00:25:40.596 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.596 13:19:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.854 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:40.854 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.854 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.854 [2024-12-06 13:19:47.362811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:40.854 [2024-12-06 13:19:47.362858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:25:40.854 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.854 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:40.854 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.855 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:40.855 [2024-12-06 13:19:47.370793] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.855 [2024-12-06 13:19:47.370846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.855 [2024-12-06 13:19:47.370862] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:40.855 [2024-12-06 13:19:47.370881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:40.855 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.855 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:25:40.855 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.855 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.112 [2024-12-06 13:19:47.416300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:41.112 BaseBdev1 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:41.112 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.113 [ 00:25:41.113 { 00:25:41.113 "name": "BaseBdev1", 00:25:41.113 "aliases": [ 00:25:41.113 "5183cf5d-9b43-419a-a334-90a7f3284383" 00:25:41.113 ], 00:25:41.113 "product_name": "Malloc disk", 00:25:41.113 "block_size": 4128, 00:25:41.113 "num_blocks": 8192, 00:25:41.113 "uuid": "5183cf5d-9b43-419a-a334-90a7f3284383", 00:25:41.113 "md_size": 32, 00:25:41.113 
"md_interleave": true, 00:25:41.113 "dif_type": 0, 00:25:41.113 "assigned_rate_limits": { 00:25:41.113 "rw_ios_per_sec": 0, 00:25:41.113 "rw_mbytes_per_sec": 0, 00:25:41.113 "r_mbytes_per_sec": 0, 00:25:41.113 "w_mbytes_per_sec": 0 00:25:41.113 }, 00:25:41.113 "claimed": true, 00:25:41.113 "claim_type": "exclusive_write", 00:25:41.113 "zoned": false, 00:25:41.113 "supported_io_types": { 00:25:41.113 "read": true, 00:25:41.113 "write": true, 00:25:41.113 "unmap": true, 00:25:41.113 "flush": true, 00:25:41.113 "reset": true, 00:25:41.113 "nvme_admin": false, 00:25:41.113 "nvme_io": false, 00:25:41.113 "nvme_io_md": false, 00:25:41.113 "write_zeroes": true, 00:25:41.113 "zcopy": true, 00:25:41.113 "get_zone_info": false, 00:25:41.113 "zone_management": false, 00:25:41.113 "zone_append": false, 00:25:41.113 "compare": false, 00:25:41.113 "compare_and_write": false, 00:25:41.113 "abort": true, 00:25:41.113 "seek_hole": false, 00:25:41.113 "seek_data": false, 00:25:41.113 "copy": true, 00:25:41.113 "nvme_iov_md": false 00:25:41.113 }, 00:25:41.113 "memory_domains": [ 00:25:41.113 { 00:25:41.113 "dma_device_id": "system", 00:25:41.113 "dma_device_type": 1 00:25:41.113 }, 00:25:41.113 { 00:25:41.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.113 "dma_device_type": 2 00:25:41.113 } 00:25:41.113 ], 00:25:41.113 "driver_specific": {} 00:25:41.113 } 00:25:41.113 ] 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:41.113 13:19:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.113 "name": "Existed_Raid", 00:25:41.113 "uuid": "4c2b9658-614f-4bc0-8adb-8a32f3a7b4f0", 00:25:41.113 "strip_size_kb": 0, 00:25:41.113 "state": "configuring", 00:25:41.113 "raid_level": "raid1", 
00:25:41.113 "superblock": true, 00:25:41.113 "num_base_bdevs": 2, 00:25:41.113 "num_base_bdevs_discovered": 1, 00:25:41.113 "num_base_bdevs_operational": 2, 00:25:41.113 "base_bdevs_list": [ 00:25:41.113 { 00:25:41.113 "name": "BaseBdev1", 00:25:41.113 "uuid": "5183cf5d-9b43-419a-a334-90a7f3284383", 00:25:41.113 "is_configured": true, 00:25:41.113 "data_offset": 256, 00:25:41.113 "data_size": 7936 00:25:41.113 }, 00:25:41.113 { 00:25:41.113 "name": "BaseBdev2", 00:25:41.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.113 "is_configured": false, 00:25:41.113 "data_offset": 0, 00:25:41.113 "data_size": 0 00:25:41.113 } 00:25:41.113 ] 00:25:41.113 }' 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.113 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.680 [2024-12-06 13:19:47.948549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:41.680 [2024-12-06 13:19:47.948612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.680 [2024-12-06 13:19:47.956586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:41.680 [2024-12-06 13:19:47.959047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:41.680 [2024-12-06 13:19:47.959103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.680 
13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:41.680 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.680 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.680 "name": "Existed_Raid", 00:25:41.680 "uuid": "c87846de-29fd-456a-9ce8-61425b528736", 00:25:41.680 "strip_size_kb": 0, 00:25:41.680 "state": "configuring", 00:25:41.680 "raid_level": "raid1", 00:25:41.680 "superblock": true, 00:25:41.680 "num_base_bdevs": 2, 00:25:41.680 "num_base_bdevs_discovered": 1, 00:25:41.680 "num_base_bdevs_operational": 2, 00:25:41.680 "base_bdevs_list": [ 00:25:41.681 { 00:25:41.681 "name": "BaseBdev1", 00:25:41.681 "uuid": "5183cf5d-9b43-419a-a334-90a7f3284383", 00:25:41.681 "is_configured": true, 00:25:41.681 "data_offset": 256, 00:25:41.681 "data_size": 7936 00:25:41.681 }, 00:25:41.681 { 00:25:41.681 "name": "BaseBdev2", 00:25:41.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.681 "is_configured": false, 00:25:41.681 "data_offset": 0, 00:25:41.681 "data_size": 0 00:25:41.681 } 00:25:41.681 ] 00:25:41.681 }' 00:25:41.681 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:25:41.681 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.246 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:25:42.246 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.246 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.247 [2024-12-06 13:19:48.508779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:42.247 [2024-12-06 13:19:48.509065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:42.247 [2024-12-06 13:19:48.509086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:42.247 [2024-12-06 13:19:48.509196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:42.247 [2024-12-06 13:19:48.509294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:42.247 [2024-12-06 13:19:48.509323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:42.247 BaseBdev2 00:25:42.247 [2024-12-06 13:19:48.509421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.247 [ 00:25:42.247 { 00:25:42.247 "name": "BaseBdev2", 00:25:42.247 "aliases": [ 00:25:42.247 "51b5b16f-f325-4569-afc0-6364ce9b94b9" 00:25:42.247 ], 00:25:42.247 "product_name": "Malloc disk", 00:25:42.247 "block_size": 4128, 00:25:42.247 "num_blocks": 8192, 00:25:42.247 "uuid": "51b5b16f-f325-4569-afc0-6364ce9b94b9", 00:25:42.247 "md_size": 32, 00:25:42.247 "md_interleave": true, 00:25:42.247 "dif_type": 0, 00:25:42.247 "assigned_rate_limits": { 00:25:42.247 "rw_ios_per_sec": 0, 00:25:42.247 "rw_mbytes_per_sec": 0, 00:25:42.247 "r_mbytes_per_sec": 0, 00:25:42.247 "w_mbytes_per_sec": 0 00:25:42.247 }, 00:25:42.247 "claimed": true, 00:25:42.247 "claim_type": "exclusive_write", 
00:25:42.247 "zoned": false, 00:25:42.247 "supported_io_types": { 00:25:42.247 "read": true, 00:25:42.247 "write": true, 00:25:42.247 "unmap": true, 00:25:42.247 "flush": true, 00:25:42.247 "reset": true, 00:25:42.247 "nvme_admin": false, 00:25:42.247 "nvme_io": false, 00:25:42.247 "nvme_io_md": false, 00:25:42.247 "write_zeroes": true, 00:25:42.247 "zcopy": true, 00:25:42.247 "get_zone_info": false, 00:25:42.247 "zone_management": false, 00:25:42.247 "zone_append": false, 00:25:42.247 "compare": false, 00:25:42.247 "compare_and_write": false, 00:25:42.247 "abort": true, 00:25:42.247 "seek_hole": false, 00:25:42.247 "seek_data": false, 00:25:42.247 "copy": true, 00:25:42.247 "nvme_iov_md": false 00:25:42.247 }, 00:25:42.247 "memory_domains": [ 00:25:42.247 { 00:25:42.247 "dma_device_id": "system", 00:25:42.247 "dma_device_type": 1 00:25:42.247 }, 00:25:42.247 { 00:25:42.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.247 "dma_device_type": 2 00:25:42.247 } 00:25:42.247 ], 00:25:42.247 "driver_specific": {} 00:25:42.247 } 00:25:42.247 ] 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:42.247 
13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.247 "name": "Existed_Raid", 00:25:42.247 "uuid": "c87846de-29fd-456a-9ce8-61425b528736", 00:25:42.247 "strip_size_kb": 0, 00:25:42.247 "state": "online", 00:25:42.247 "raid_level": "raid1", 00:25:42.247 "superblock": true, 00:25:42.247 "num_base_bdevs": 2, 00:25:42.247 "num_base_bdevs_discovered": 2, 00:25:42.247 
"num_base_bdevs_operational": 2, 00:25:42.247 "base_bdevs_list": [ 00:25:42.247 { 00:25:42.247 "name": "BaseBdev1", 00:25:42.247 "uuid": "5183cf5d-9b43-419a-a334-90a7f3284383", 00:25:42.247 "is_configured": true, 00:25:42.247 "data_offset": 256, 00:25:42.247 "data_size": 7936 00:25:42.247 }, 00:25:42.247 { 00:25:42.247 "name": "BaseBdev2", 00:25:42.247 "uuid": "51b5b16f-f325-4569-afc0-6364ce9b94b9", 00:25:42.247 "is_configured": true, 00:25:42.247 "data_offset": 256, 00:25:42.247 "data_size": 7936 00:25:42.247 } 00:25:42.247 ] 00:25:42.247 }' 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.247 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.816 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:42.816 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:42.816 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:42.816 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:42.816 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:42.816 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.817 13:19:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.817 [2024-12-06 13:19:49.053361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:42.817 "name": "Existed_Raid", 00:25:42.817 "aliases": [ 00:25:42.817 "c87846de-29fd-456a-9ce8-61425b528736" 00:25:42.817 ], 00:25:42.817 "product_name": "Raid Volume", 00:25:42.817 "block_size": 4128, 00:25:42.817 "num_blocks": 7936, 00:25:42.817 "uuid": "c87846de-29fd-456a-9ce8-61425b528736", 00:25:42.817 "md_size": 32, 00:25:42.817 "md_interleave": true, 00:25:42.817 "dif_type": 0, 00:25:42.817 "assigned_rate_limits": { 00:25:42.817 "rw_ios_per_sec": 0, 00:25:42.817 "rw_mbytes_per_sec": 0, 00:25:42.817 "r_mbytes_per_sec": 0, 00:25:42.817 "w_mbytes_per_sec": 0 00:25:42.817 }, 00:25:42.817 "claimed": false, 00:25:42.817 "zoned": false, 00:25:42.817 "supported_io_types": { 00:25:42.817 "read": true, 00:25:42.817 "write": true, 00:25:42.817 "unmap": false, 00:25:42.817 "flush": false, 00:25:42.817 "reset": true, 00:25:42.817 "nvme_admin": false, 00:25:42.817 "nvme_io": false, 00:25:42.817 "nvme_io_md": false, 00:25:42.817 "write_zeroes": true, 00:25:42.817 "zcopy": false, 00:25:42.817 "get_zone_info": false, 00:25:42.817 "zone_management": false, 00:25:42.817 "zone_append": false, 00:25:42.817 "compare": false, 00:25:42.817 "compare_and_write": false, 00:25:42.817 "abort": false, 00:25:42.817 "seek_hole": false, 00:25:42.817 "seek_data": false, 00:25:42.817 "copy": false, 00:25:42.817 "nvme_iov_md": false 00:25:42.817 }, 00:25:42.817 "memory_domains": [ 00:25:42.817 { 00:25:42.817 "dma_device_id": "system", 00:25:42.817 "dma_device_type": 1 00:25:42.817 }, 00:25:42.817 { 00:25:42.817 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:42.817 "dma_device_type": 2 00:25:42.817 }, 00:25:42.817 { 00:25:42.817 "dma_device_id": "system", 00:25:42.817 "dma_device_type": 1 00:25:42.817 }, 00:25:42.817 { 00:25:42.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.817 "dma_device_type": 2 00:25:42.817 } 00:25:42.817 ], 00:25:42.817 "driver_specific": { 00:25:42.817 "raid": { 00:25:42.817 "uuid": "c87846de-29fd-456a-9ce8-61425b528736", 00:25:42.817 "strip_size_kb": 0, 00:25:42.817 "state": "online", 00:25:42.817 "raid_level": "raid1", 00:25:42.817 "superblock": true, 00:25:42.817 "num_base_bdevs": 2, 00:25:42.817 "num_base_bdevs_discovered": 2, 00:25:42.817 "num_base_bdevs_operational": 2, 00:25:42.817 "base_bdevs_list": [ 00:25:42.817 { 00:25:42.817 "name": "BaseBdev1", 00:25:42.817 "uuid": "5183cf5d-9b43-419a-a334-90a7f3284383", 00:25:42.817 "is_configured": true, 00:25:42.817 "data_offset": 256, 00:25:42.817 "data_size": 7936 00:25:42.817 }, 00:25:42.817 { 00:25:42.817 "name": "BaseBdev2", 00:25:42.817 "uuid": "51b5b16f-f325-4569-afc0-6364ce9b94b9", 00:25:42.817 "is_configured": true, 00:25:42.817 "data_offset": 256, 00:25:42.817 "data_size": 7936 00:25:42.817 } 00:25:42.817 ] 00:25:42.817 } 00:25:42.817 } 00:25:42.817 }' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:42.817 BaseBdev2' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:42.817 
13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.817 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:42.817 [2024-12-06 13:19:49.305096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:43.075 13:19:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:43.075 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.076 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.076 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.076 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.076 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:43.076 "name": "Existed_Raid", 00:25:43.076 "uuid": "c87846de-29fd-456a-9ce8-61425b528736", 00:25:43.076 "strip_size_kb": 0, 00:25:43.076 "state": "online", 00:25:43.076 "raid_level": "raid1", 00:25:43.076 "superblock": true, 00:25:43.076 "num_base_bdevs": 2, 00:25:43.076 "num_base_bdevs_discovered": 1, 00:25:43.076 "num_base_bdevs_operational": 1, 00:25:43.076 "base_bdevs_list": [ 00:25:43.076 { 00:25:43.076 "name": null, 00:25:43.076 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:43.076 "is_configured": false, 00:25:43.076 "data_offset": 0, 00:25:43.076 "data_size": 7936 00:25:43.076 }, 00:25:43.076 { 00:25:43.076 "name": "BaseBdev2", 00:25:43.076 "uuid": "51b5b16f-f325-4569-afc0-6364ce9b94b9", 00:25:43.076 "is_configured": true, 00:25:43.076 "data_offset": 256, 00:25:43.076 "data_size": 7936 00:25:43.076 } 00:25:43.076 ] 00:25:43.076 }' 00:25:43.076 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:43.076 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:43.641 13:19:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 [2024-12-06 13:19:49.984875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:43.641 [2024-12-06 13:19:49.985022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:43.641 [2024-12-06 13:19:50.077892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:43.641 [2024-12-06 13:19:50.077968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:43.641 [2024-12-06 13:19:50.077990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89254 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89254 ']' 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89254 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:25:43.641 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.642 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89254 00:25:43.899 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.899 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.899 killing process with pid 89254 00:25:43.899 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89254' 00:25:43.899 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89254 00:25:43.899 [2024-12-06 13:19:50.171703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:43.899 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89254 00:25:43.899 [2024-12-06 13:19:50.186674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:44.851 
13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:25:44.851 00:25:44.851 real 0m5.497s 00:25:44.851 user 0m8.248s 00:25:44.851 sys 0m0.821s 00:25:44.851 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:44.851 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:44.851 ************************************ 00:25:44.851 END TEST raid_state_function_test_sb_md_interleaved 00:25:44.851 ************************************ 00:25:44.851 13:19:51 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:25:44.851 13:19:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:44.851 13:19:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.851 13:19:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:44.851 ************************************ 00:25:44.851 START TEST raid_superblock_test_md_interleaved 00:25:44.851 ************************************ 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89512 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89512 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89512 ']' 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.851 13:19:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:45.111 [2024-12-06 13:19:51.462679] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:45.111 [2024-12-06 13:19:51.462862] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89512 ] 00:25:45.370 [2024-12-06 13:19:51.654704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.370 [2024-12-06 13:19:51.815066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.627 [2024-12-06 13:19:52.022528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:45.627 [2024-12-06 13:19:52.022579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.195 malloc1 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.195 [2024-12-06 13:19:52.475205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:46.195 [2024-12-06 13:19:52.475300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.195 [2024-12-06 13:19:52.475334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:46.195 [2024-12-06 13:19:52.475351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.195 
[2024-12-06 13:19:52.478030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.195 [2024-12-06 13:19:52.478091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:46.195 pt1 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.195 malloc2 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.195 [2024-12-06 13:19:52.528535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:46.195 [2024-12-06 13:19:52.528613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.195 [2024-12-06 13:19:52.528646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:46.195 [2024-12-06 13:19:52.528664] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.195 [2024-12-06 13:19:52.531215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.195 [2024-12-06 13:19:52.531290] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:46.195 pt2 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.195 [2024-12-06 13:19:52.536587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:46.195 [2024-12-06 13:19:52.539144] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:46.195 [2024-12-06 13:19:52.539409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:46.195 [2024-12-06 13:19:52.539429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:46.195 [2024-12-06 13:19:52.539542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:46.195 [2024-12-06 13:19:52.539649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:46.195 [2024-12-06 13:19:52.539670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:46.195 [2024-12-06 13:19:52.539771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:46.195 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.196 
13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:46.196 "name": "raid_bdev1", 00:25:46.196 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:46.196 "strip_size_kb": 0, 00:25:46.196 "state": "online", 00:25:46.196 "raid_level": "raid1", 00:25:46.196 "superblock": true, 00:25:46.196 "num_base_bdevs": 2, 00:25:46.196 "num_base_bdevs_discovered": 2, 00:25:46.196 "num_base_bdevs_operational": 2, 00:25:46.196 "base_bdevs_list": [ 00:25:46.196 { 00:25:46.196 "name": "pt1", 00:25:46.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:46.196 "is_configured": true, 00:25:46.196 "data_offset": 256, 00:25:46.196 "data_size": 7936 00:25:46.196 }, 00:25:46.196 { 00:25:46.196 "name": "pt2", 00:25:46.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:46.196 "is_configured": true, 00:25:46.196 "data_offset": 256, 00:25:46.196 "data_size": 7936 00:25:46.196 } 00:25:46.196 ] 00:25:46.196 }' 00:25:46.196 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:46.196 13:19:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.762 [2024-12-06 13:19:53.037106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.762 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:46.762 "name": "raid_bdev1", 00:25:46.762 "aliases": [ 00:25:46.762 "a45857d2-921e-4420-8399-c51389fd21f0" 00:25:46.762 ], 00:25:46.762 "product_name": "Raid Volume", 00:25:46.762 "block_size": 4128, 00:25:46.762 "num_blocks": 7936, 00:25:46.762 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:46.762 "md_size": 32, 
00:25:46.762 "md_interleave": true, 00:25:46.762 "dif_type": 0, 00:25:46.762 "assigned_rate_limits": { 00:25:46.762 "rw_ios_per_sec": 0, 00:25:46.762 "rw_mbytes_per_sec": 0, 00:25:46.762 "r_mbytes_per_sec": 0, 00:25:46.762 "w_mbytes_per_sec": 0 00:25:46.762 }, 00:25:46.762 "claimed": false, 00:25:46.762 "zoned": false, 00:25:46.762 "supported_io_types": { 00:25:46.763 "read": true, 00:25:46.763 "write": true, 00:25:46.763 "unmap": false, 00:25:46.763 "flush": false, 00:25:46.763 "reset": true, 00:25:46.763 "nvme_admin": false, 00:25:46.763 "nvme_io": false, 00:25:46.763 "nvme_io_md": false, 00:25:46.763 "write_zeroes": true, 00:25:46.763 "zcopy": false, 00:25:46.763 "get_zone_info": false, 00:25:46.763 "zone_management": false, 00:25:46.763 "zone_append": false, 00:25:46.763 "compare": false, 00:25:46.763 "compare_and_write": false, 00:25:46.763 "abort": false, 00:25:46.763 "seek_hole": false, 00:25:46.763 "seek_data": false, 00:25:46.763 "copy": false, 00:25:46.763 "nvme_iov_md": false 00:25:46.763 }, 00:25:46.763 "memory_domains": [ 00:25:46.763 { 00:25:46.763 "dma_device_id": "system", 00:25:46.763 "dma_device_type": 1 00:25:46.763 }, 00:25:46.763 { 00:25:46.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.763 "dma_device_type": 2 00:25:46.763 }, 00:25:46.763 { 00:25:46.763 "dma_device_id": "system", 00:25:46.763 "dma_device_type": 1 00:25:46.763 }, 00:25:46.763 { 00:25:46.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.763 "dma_device_type": 2 00:25:46.763 } 00:25:46.763 ], 00:25:46.763 "driver_specific": { 00:25:46.763 "raid": { 00:25:46.763 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:46.763 "strip_size_kb": 0, 00:25:46.763 "state": "online", 00:25:46.763 "raid_level": "raid1", 00:25:46.763 "superblock": true, 00:25:46.763 "num_base_bdevs": 2, 00:25:46.763 "num_base_bdevs_discovered": 2, 00:25:46.763 "num_base_bdevs_operational": 2, 00:25:46.763 "base_bdevs_list": [ 00:25:46.763 { 00:25:46.763 "name": "pt1", 00:25:46.763 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:46.763 "is_configured": true, 00:25:46.763 "data_offset": 256, 00:25:46.763 "data_size": 7936 00:25:46.763 }, 00:25:46.763 { 00:25:46.763 "name": "pt2", 00:25:46.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:46.763 "is_configured": true, 00:25:46.763 "data_offset": 256, 00:25:46.763 "data_size": 7936 00:25:46.763 } 00:25:46.763 ] 00:25:46.763 } 00:25:46.763 } 00:25:46.763 }' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:46.763 pt2' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:46.763 13:19:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.763 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:46.763 [2024-12-06 13:19:53.273032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a45857d2-921e-4420-8399-c51389fd21f0 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z a45857d2-921e-4420-8399-c51389fd21f0 ']' 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.022 [2024-12-06 13:19:53.312702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:47.022 [2024-12-06 13:19:53.312735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:47.022 [2024-12-06 13:19:53.312852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:47.022 [2024-12-06 13:19:53.312937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:47.022 [2024-12-06 13:19:53.312960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.022 13:19:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.022 13:19:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.022 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.022 [2024-12-06 13:19:53.444767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:47.022 [2024-12-06 13:19:53.447333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:47.022 [2024-12-06 13:19:53.447436] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:25:47.022 [2024-12-06 13:19:53.447532] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:47.023 [2024-12-06 13:19:53.447559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:47.023 [2024-12-06 13:19:53.447575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:47.023 request: 00:25:47.023 { 00:25:47.023 "name": "raid_bdev1", 00:25:47.023 "raid_level": "raid1", 00:25:47.023 "base_bdevs": [ 00:25:47.023 "malloc1", 00:25:47.023 "malloc2" 00:25:47.023 ], 00:25:47.023 "superblock": false, 00:25:47.023 "method": "bdev_raid_create", 00:25:47.023 "req_id": 1 00:25:47.023 } 00:25:47.023 Got JSON-RPC error response 00:25:47.023 response: 00:25:47.023 { 00:25:47.023 "code": -17, 00:25:47.023 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:47.023 } 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.023 13:19:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.023 [2024-12-06 13:19:53.504763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:47.023 [2024-12-06 13:19:53.504833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.023 [2024-12-06 13:19:53.504859] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:47.023 [2024-12-06 13:19:53.504877] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.023 [2024-12-06 13:19:53.507604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.023 [2024-12-06 13:19:53.507655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:47.023 [2024-12-06 13:19:53.507719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:47.023 [2024-12-06 13:19:53.507811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:47.023 pt1 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.023 13:19:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.023 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.282 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.282 
"name": "raid_bdev1", 00:25:47.282 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:47.282 "strip_size_kb": 0, 00:25:47.282 "state": "configuring", 00:25:47.282 "raid_level": "raid1", 00:25:47.282 "superblock": true, 00:25:47.282 "num_base_bdevs": 2, 00:25:47.282 "num_base_bdevs_discovered": 1, 00:25:47.282 "num_base_bdevs_operational": 2, 00:25:47.282 "base_bdevs_list": [ 00:25:47.282 { 00:25:47.282 "name": "pt1", 00:25:47.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:47.282 "is_configured": true, 00:25:47.282 "data_offset": 256, 00:25:47.282 "data_size": 7936 00:25:47.282 }, 00:25:47.282 { 00:25:47.282 "name": null, 00:25:47.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:47.282 "is_configured": false, 00:25:47.282 "data_offset": 256, 00:25:47.282 "data_size": 7936 00:25:47.282 } 00:25:47.282 ] 00:25:47.282 }' 00:25:47.282 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.282 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.599 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:47.599 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:47.599 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:47.599 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:47.599 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.599 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.599 [2024-12-06 13:19:54.004926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:47.599 [2024-12-06 13:19:54.005053] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.599 [2024-12-06 13:19:54.005088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:47.599 [2024-12-06 13:19:54.005119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.599 [2024-12-06 13:19:54.005339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.599 [2024-12-06 13:19:54.005383] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:47.599 [2024-12-06 13:19:54.005472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:47.599 [2024-12-06 13:19:54.005510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:47.599 [2024-12-06 13:19:54.005629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:47.599 [2024-12-06 13:19:54.005661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:47.599 [2024-12-06 13:19:54.005752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:47.599 [2024-12-06 13:19:54.005854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:47.599 [2024-12-06 13:19:54.005876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:47.599 [2024-12-06 13:19:54.005966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.599 pt2 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:47.599 13:19:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.599 "name": 
"raid_bdev1", 00:25:47.599 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:47.599 "strip_size_kb": 0, 00:25:47.599 "state": "online", 00:25:47.599 "raid_level": "raid1", 00:25:47.599 "superblock": true, 00:25:47.599 "num_base_bdevs": 2, 00:25:47.599 "num_base_bdevs_discovered": 2, 00:25:47.599 "num_base_bdevs_operational": 2, 00:25:47.599 "base_bdevs_list": [ 00:25:47.599 { 00:25:47.599 "name": "pt1", 00:25:47.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:47.599 "is_configured": true, 00:25:47.599 "data_offset": 256, 00:25:47.599 "data_size": 7936 00:25:47.599 }, 00:25:47.599 { 00:25:47.599 "name": "pt2", 00:25:47.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:47.599 "is_configured": true, 00:25:47.599 "data_offset": 256, 00:25:47.599 "data_size": 7936 00:25:47.599 } 00:25:47.599 ] 00:25:47.599 }' 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.599 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.185 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.186 [2024-12-06 13:19:54.513391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.186 "name": "raid_bdev1", 00:25:48.186 "aliases": [ 00:25:48.186 "a45857d2-921e-4420-8399-c51389fd21f0" 00:25:48.186 ], 00:25:48.186 "product_name": "Raid Volume", 00:25:48.186 "block_size": 4128, 00:25:48.186 "num_blocks": 7936, 00:25:48.186 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:48.186 "md_size": 32, 00:25:48.186 "md_interleave": true, 00:25:48.186 "dif_type": 0, 00:25:48.186 "assigned_rate_limits": { 00:25:48.186 "rw_ios_per_sec": 0, 00:25:48.186 "rw_mbytes_per_sec": 0, 00:25:48.186 "r_mbytes_per_sec": 0, 00:25:48.186 "w_mbytes_per_sec": 0 00:25:48.186 }, 00:25:48.186 "claimed": false, 00:25:48.186 "zoned": false, 00:25:48.186 "supported_io_types": { 00:25:48.186 "read": true, 00:25:48.186 "write": true, 00:25:48.186 "unmap": false, 00:25:48.186 "flush": false, 00:25:48.186 "reset": true, 00:25:48.186 "nvme_admin": false, 00:25:48.186 "nvme_io": false, 00:25:48.186 "nvme_io_md": false, 00:25:48.186 "write_zeroes": true, 00:25:48.186 "zcopy": false, 00:25:48.186 "get_zone_info": false, 00:25:48.186 "zone_management": false, 00:25:48.186 "zone_append": false, 00:25:48.186 "compare": false, 00:25:48.186 "compare_and_write": false, 00:25:48.186 "abort": false, 00:25:48.186 "seek_hole": false, 00:25:48.186 "seek_data": false, 00:25:48.186 "copy": false, 00:25:48.186 "nvme_iov_md": false 00:25:48.186 }, 
00:25:48.186 "memory_domains": [ 00:25:48.186 { 00:25:48.186 "dma_device_id": "system", 00:25:48.186 "dma_device_type": 1 00:25:48.186 }, 00:25:48.186 { 00:25:48.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.186 "dma_device_type": 2 00:25:48.186 }, 00:25:48.186 { 00:25:48.186 "dma_device_id": "system", 00:25:48.186 "dma_device_type": 1 00:25:48.186 }, 00:25:48.186 { 00:25:48.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.186 "dma_device_type": 2 00:25:48.186 } 00:25:48.186 ], 00:25:48.186 "driver_specific": { 00:25:48.186 "raid": { 00:25:48.186 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:48.186 "strip_size_kb": 0, 00:25:48.186 "state": "online", 00:25:48.186 "raid_level": "raid1", 00:25:48.186 "superblock": true, 00:25:48.186 "num_base_bdevs": 2, 00:25:48.186 "num_base_bdevs_discovered": 2, 00:25:48.186 "num_base_bdevs_operational": 2, 00:25:48.186 "base_bdevs_list": [ 00:25:48.186 { 00:25:48.186 "name": "pt1", 00:25:48.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:48.186 "is_configured": true, 00:25:48.186 "data_offset": 256, 00:25:48.186 "data_size": 7936 00:25:48.186 }, 00:25:48.186 { 00:25:48.186 "name": "pt2", 00:25:48.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:48.186 "is_configured": true, 00:25:48.186 "data_offset": 256, 00:25:48.186 "data_size": 7936 00:25:48.186 } 00:25:48.186 ] 00:25:48.186 } 00:25:48.186 } 00:25:48.186 }' 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:48.186 pt2' 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.186 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:48.446 [2024-12-06 13:19:54.753404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' a45857d2-921e-4420-8399-c51389fd21f0 '!=' a45857d2-921e-4420-8399-c51389fd21f0 ']' 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.446 [2024-12-06 13:19:54.801192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:25:48.446 "name": "raid_bdev1", 00:25:48.446 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:48.446 "strip_size_kb": 0, 00:25:48.446 "state": "online", 00:25:48.446 "raid_level": "raid1", 00:25:48.446 "superblock": true, 00:25:48.446 "num_base_bdevs": 2, 00:25:48.446 "num_base_bdevs_discovered": 1, 00:25:48.446 "num_base_bdevs_operational": 1, 00:25:48.446 "base_bdevs_list": [ 00:25:48.446 { 00:25:48.446 "name": null, 00:25:48.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.446 "is_configured": false, 00:25:48.446 "data_offset": 0, 00:25:48.446 "data_size": 7936 00:25:48.446 }, 00:25:48.446 { 00:25:48.446 "name": "pt2", 00:25:48.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:48.446 "is_configured": true, 00:25:48.446 "data_offset": 256, 00:25:48.446 "data_size": 7936 00:25:48.446 } 00:25:48.446 ] 00:25:48.446 }' 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.446 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 [2024-12-06 13:19:55.337286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:49.015 [2024-12-06 13:19:55.337326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:49.015 [2024-12-06 13:19:55.337429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:49.015 [2024-12-06 13:19:55.337515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:49.015 [2024-12-06 
13:19:55.337538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 [2024-12-06 13:19:55.409269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:49.015 [2024-12-06 13:19:55.409341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:49.015 [2024-12-06 13:19:55.409368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:49.015 [2024-12-06 13:19:55.409386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.015 [2024-12-06 13:19:55.412129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.015 [2024-12-06 13:19:55.412186] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:49.015 [2024-12-06 13:19:55.412259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:49.015 [2024-12-06 13:19:55.412323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:49.015 [2024-12-06 13:19:55.412416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:49.015 [2024-12-06 13:19:55.412439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:25:49.015 [2024-12-06 13:19:55.412568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:49.015 [2024-12-06 13:19:55.412675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:49.015 [2024-12-06 13:19:55.412690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:49.015 [2024-12-06 13:19:55.412778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.015 pt2 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.015 "name": "raid_bdev1", 00:25:49.015 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:49.015 "strip_size_kb": 0, 00:25:49.015 "state": "online", 00:25:49.015 "raid_level": "raid1", 00:25:49.015 "superblock": true, 00:25:49.015 "num_base_bdevs": 2, 00:25:49.015 "num_base_bdevs_discovered": 1, 00:25:49.015 "num_base_bdevs_operational": 1, 00:25:49.015 "base_bdevs_list": [ 00:25:49.015 { 00:25:49.015 "name": null, 00:25:49.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.015 "is_configured": false, 00:25:49.015 "data_offset": 256, 00:25:49.015 "data_size": 7936 00:25:49.015 }, 00:25:49.015 { 00:25:49.015 "name": "pt2", 00:25:49.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:49.015 "is_configured": true, 00:25:49.015 "data_offset": 256, 00:25:49.015 "data_size": 7936 00:25:49.015 } 00:25:49.015 ] 00:25:49.015 }' 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.015 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.582 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:49.582 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:49.582 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.582 [2024-12-06 13:19:55.933425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:49.583 [2024-12-06 13:19:55.933479] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:49.583 [2024-12-06 13:19:55.933576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:49.583 [2024-12-06 13:19:55.933651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:49.583 [2024-12-06 13:19:55.933668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.583 [2024-12-06 13:19:55.989437] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:49.583 [2024-12-06 13:19:55.989518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:49.583 [2024-12-06 13:19:55.989552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:49.583 [2024-12-06 13:19:55.989568] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.583 [2024-12-06 13:19:55.992124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.583 [2024-12-06 13:19:55.992168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:49.583 [2024-12-06 13:19:55.992240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:49.583 [2024-12-06 13:19:55.992310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:49.583 [2024-12-06 13:19:55.992460] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:49.583 [2024-12-06 13:19:55.992479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:49.583 [2024-12-06 13:19:55.992505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:49.583 [2024-12-06 13:19:55.992576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:49.583 [2024-12-06 13:19:55.992677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:49.583 [2024-12-06 13:19:55.992704] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:49.583 [2024-12-06 13:19:55.992797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:49.583 [2024-12-06 13:19:55.992881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:49.583 [2024-12-06 13:19:55.992901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:49.583 [2024-12-06 13:19:55.992994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.583 pt1 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.583 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:49.583 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.583 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.583 "name": "raid_bdev1", 00:25:49.583 "uuid": "a45857d2-921e-4420-8399-c51389fd21f0", 00:25:49.583 "strip_size_kb": 0, 00:25:49.583 "state": "online", 00:25:49.583 "raid_level": "raid1", 00:25:49.583 "superblock": true, 00:25:49.583 "num_base_bdevs": 2, 00:25:49.583 "num_base_bdevs_discovered": 1, 00:25:49.583 "num_base_bdevs_operational": 1, 00:25:49.583 "base_bdevs_list": [ 00:25:49.583 { 00:25:49.583 "name": null, 00:25:49.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.583 "is_configured": false, 00:25:49.583 "data_offset": 256, 00:25:49.583 "data_size": 7936 00:25:49.583 }, 00:25:49.583 { 00:25:49.583 "name": "pt2", 00:25:49.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:49.583 "is_configured": true, 00:25:49.583 "data_offset": 256, 00:25:49.583 "data_size": 7936 00:25:49.583 } 00:25:49.583 ] 00:25:49.583 }' 00:25:49.583 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.583 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.152 13:19:56 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:50.152 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:50.152 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.152 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.152 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.152 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:50.153 [2024-12-06 13:19:56.545918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' a45857d2-921e-4420-8399-c51389fd21f0 '!=' a45857d2-921e-4420-8399-c51389fd21f0 ']' 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89512 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89512 ']' 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89512 00:25:50.153 13:19:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89512 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:50.153 killing process with pid 89512 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89512' 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89512 00:25:50.153 [2024-12-06 13:19:56.618031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:50.153 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89512 00:25:50.153 [2024-12-06 13:19:56.618149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:50.153 [2024-12-06 13:19:56.618218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:50.153 [2024-12-06 13:19:56.618243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:50.412 [2024-12-06 13:19:56.801840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:51.351 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:25:51.351 00:25:51.351 real 0m6.545s 00:25:51.351 user 0m10.296s 00:25:51.351 sys 0m0.989s 00:25:51.351 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:25:51.351 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.351 ************************************ 00:25:51.351 END TEST raid_superblock_test_md_interleaved 00:25:51.351 ************************************ 00:25:51.610 13:19:57 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:25:51.610 13:19:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:51.610 13:19:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.610 13:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:51.610 ************************************ 00:25:51.610 START TEST raid_rebuild_test_sb_md_interleaved 00:25:51.610 ************************************ 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:51.610 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:51.611 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89835 00:25:51.611 13:19:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89835 00:25:51.611 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:51.611 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89835 ']' 00:25:51.611 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.611 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.611 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.611 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.611 13:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:51.611 [2024-12-06 13:19:58.023784] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:25:51.611 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:51.611 Zero copy mechanism will not be used. 
00:25:51.611 [2024-12-06 13:19:58.023971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89835 ] 00:25:51.871 [2024-12-06 13:19:58.213898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.871 [2024-12-06 13:19:58.367760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.130 [2024-12-06 13:19:58.598899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:52.130 [2024-12-06 13:19:58.598971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:52.699 13:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.699 13:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:25:52.699 13:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:52.699 13:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:25:52.699 13:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.699 BaseBdev1_malloc 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.699 [2024-12-06 13:19:59.020933] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:52.699 [2024-12-06 13:19:59.021005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.699 [2024-12-06 13:19:59.021038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:52.699 [2024-12-06 13:19:59.021057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.699 [2024-12-06 13:19:59.023516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.699 [2024-12-06 13:19:59.023572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:52.699 BaseBdev1 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.699 BaseBdev2_malloc 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.699 [2024-12-06 13:19:59.068727] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:52.699 [2024-12-06 13:19:59.068802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.699 [2024-12-06 13:19:59.068831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:52.699 [2024-12-06 13:19:59.068849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.699 [2024-12-06 13:19:59.071303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.699 [2024-12-06 13:19:59.071351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:52.699 BaseBdev2 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.699 spare_malloc 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.699 spare_delay 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.699 [2024-12-06 13:19:59.140902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:52.699 [2024-12-06 13:19:59.140976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.699 [2024-12-06 13:19:59.141009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:52.699 [2024-12-06 13:19:59.141032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.699 [2024-12-06 13:19:59.143628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.699 [2024-12-06 13:19:59.143678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:52.699 spare 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.699 [2024-12-06 13:19:59.148957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:52.699 [2024-12-06 13:19:59.151542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:52.699 [2024-12-06 
13:19:59.151825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:52.699 [2024-12-06 13:19:59.151858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:52.699 [2024-12-06 13:19:59.151954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:52.699 [2024-12-06 13:19:59.152076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:52.699 [2024-12-06 13:19:59.152090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:52.699 [2024-12-06 13:19:59.152184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.699 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.700 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:52.700 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.700 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:52.700 "name": "raid_bdev1", 00:25:52.700 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:52.700 "strip_size_kb": 0, 00:25:52.700 "state": "online", 00:25:52.700 "raid_level": "raid1", 00:25:52.700 "superblock": true, 00:25:52.700 "num_base_bdevs": 2, 00:25:52.700 "num_base_bdevs_discovered": 2, 00:25:52.700 "num_base_bdevs_operational": 2, 00:25:52.700 "base_bdevs_list": [ 00:25:52.700 { 00:25:52.700 "name": "BaseBdev1", 00:25:52.700 "uuid": "7a9aa8f3-4159-57d5-a5e0-77cc87b882c4", 00:25:52.700 "is_configured": true, 00:25:52.700 "data_offset": 256, 00:25:52.700 "data_size": 7936 00:25:52.700 }, 00:25:52.700 { 00:25:52.700 "name": "BaseBdev2", 00:25:52.700 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:52.700 "is_configured": true, 00:25:52.700 "data_offset": 256, 00:25:52.700 "data_size": 7936 00:25:52.700 } 00:25:52.700 ] 00:25:52.700 }' 00:25:52.700 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:52.700 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.267 13:19:59 
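The `verify_raid_bdev_state` calls above reduce to fetching the raid bdev's JSON with `rpc_cmd bdev_raid_get_bdevs all` and filtering it with jq. As a minimal standalone sketch of that filtering step (the JSON is a trimmed copy of the output shown in this log; the live SPDK socket and `rpc_cmd` helper are assumed unavailable here):

```shell
#!/usr/bin/env bash
# Sketch: check raid bdev state the way verify_raid_bdev_state does,
# against a hard-coded sample of the log's bdev_raid_get_bdevs output.
raid_bdev_info='{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs":2,"num_base_bdevs_discovered":2,"num_base_bdevs_operational":2}'

state=$(jq -r '.state' <<<"$raid_bdev_info")
raid_level=$(jq -r '.raid_level' <<<"$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$raid_bdev_info")

# Mirror the test's expectations: online raid1 with both base bdevs found.
[[ $state == online ]] || { echo "unexpected state: $state" >&2; exit 1; }
[[ $raid_level == raid1 ]] || { echo "unexpected level: $raid_level" >&2; exit 1; }
echo "state=$state level=$raid_level discovered=$discovered"
```

In the real test the same jq filter (`.[] | select(.name == "raid_bdev1")`) first picks this object out of the `bdev_raid_get_bdevs all` array.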
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.267 [2024-12-06 13:19:59.633502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:53.267 13:19:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.267 [2024-12-06 13:19:59.733061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.267 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.267 13:19:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.268 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.268 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.268 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.268 "name": "raid_bdev1", 00:25:53.268 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:53.268 "strip_size_kb": 0, 00:25:53.268 "state": "online", 00:25:53.268 "raid_level": "raid1", 00:25:53.268 "superblock": true, 00:25:53.268 "num_base_bdevs": 2, 00:25:53.268 "num_base_bdevs_discovered": 1, 00:25:53.268 "num_base_bdevs_operational": 1, 00:25:53.268 "base_bdevs_list": [ 00:25:53.268 { 00:25:53.268 "name": null, 00:25:53.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.268 "is_configured": false, 00:25:53.268 "data_offset": 0, 00:25:53.268 "data_size": 7936 00:25:53.268 }, 00:25:53.268 { 00:25:53.268 "name": "BaseBdev2", 00:25:53.268 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:53.268 "is_configured": true, 00:25:53.268 "data_offset": 256, 00:25:53.268 "data_size": 7936 00:25:53.268 } 00:25:53.268 ] 00:25:53.268 }' 00:25:53.268 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.268 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.835 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:53.835 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.835 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:53.835 [2024-12-06 13:20:00.201263] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:53.835 [2024-12-06 13:20:00.217763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:53.835 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.835 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:53.835 [2024-12-06 13:20:00.220261] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:54.781 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:54.782 "name": "raid_bdev1", 00:25:54.782 
"uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:54.782 "strip_size_kb": 0, 00:25:54.782 "state": "online", 00:25:54.782 "raid_level": "raid1", 00:25:54.782 "superblock": true, 00:25:54.782 "num_base_bdevs": 2, 00:25:54.782 "num_base_bdevs_discovered": 2, 00:25:54.782 "num_base_bdevs_operational": 2, 00:25:54.782 "process": { 00:25:54.782 "type": "rebuild", 00:25:54.782 "target": "spare", 00:25:54.782 "progress": { 00:25:54.782 "blocks": 2560, 00:25:54.782 "percent": 32 00:25:54.782 } 00:25:54.782 }, 00:25:54.782 "base_bdevs_list": [ 00:25:54.782 { 00:25:54.782 "name": "spare", 00:25:54.782 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:25:54.782 "is_configured": true, 00:25:54.782 "data_offset": 256, 00:25:54.782 "data_size": 7936 00:25:54.782 }, 00:25:54.782 { 00:25:54.782 "name": "BaseBdev2", 00:25:54.782 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:54.782 "is_configured": true, 00:25:54.782 "data_offset": 256, 00:25:54.782 "data_size": 7936 00:25:54.782 } 00:25:54.782 ] 00:25:54.782 }' 00:25:54.782 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.041 [2024-12-06 13:20:01.377553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:55.041 [2024-12-06 13:20:01.429550] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:55.041 [2024-12-06 13:20:01.429682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.041 [2024-12-06 13:20:01.429709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:55.041 [2024-12-06 13:20:01.429729] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:55.041 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.042 "name": "raid_bdev1", 00:25:55.042 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:55.042 "strip_size_kb": 0, 00:25:55.042 "state": "online", 00:25:55.042 "raid_level": "raid1", 00:25:55.042 "superblock": true, 00:25:55.042 "num_base_bdevs": 2, 00:25:55.042 "num_base_bdevs_discovered": 1, 00:25:55.042 "num_base_bdevs_operational": 1, 00:25:55.042 "base_bdevs_list": [ 00:25:55.042 { 00:25:55.042 "name": null, 00:25:55.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.042 "is_configured": false, 00:25:55.042 "data_offset": 0, 00:25:55.042 "data_size": 7936 00:25:55.042 }, 00:25:55.042 { 00:25:55.042 "name": "BaseBdev2", 00:25:55.042 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:55.042 "is_configured": true, 00:25:55.042 "data_offset": 256, 00:25:55.042 "data_size": 7936 00:25:55.042 } 00:25:55.042 ] 00:25:55.042 }' 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.042 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.610 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:55.610 "name": "raid_bdev1", 00:25:55.610 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:55.610 "strip_size_kb": 0, 00:25:55.610 "state": "online", 00:25:55.610 "raid_level": "raid1", 00:25:55.610 "superblock": true, 00:25:55.610 "num_base_bdevs": 2, 00:25:55.610 "num_base_bdevs_discovered": 1, 00:25:55.610 "num_base_bdevs_operational": 1, 00:25:55.610 "base_bdevs_list": [ 00:25:55.610 { 00:25:55.610 "name": null, 00:25:55.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.610 "is_configured": false, 00:25:55.610 "data_offset": 0, 00:25:55.610 "data_size": 7936 00:25:55.610 }, 00:25:55.610 { 00:25:55.610 "name": "BaseBdev2", 00:25:55.610 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:55.610 "is_configured": true, 00:25:55.610 "data_offset": 256, 00:25:55.610 "data_size": 7936 00:25:55.610 } 00:25:55.610 ] 00:25:55.610 }' 
00:25:55.610 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:55.610 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:55.610 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:55.610 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:55.610 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:55.610 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.610 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:55.610 [2024-12-06 13:20:02.121939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:55.868 [2024-12-06 13:20:02.138544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:55.868 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.868 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:55.868 [2024-12-06 13:20:02.141056] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:56.802 "name": "raid_bdev1", 00:25:56.802 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:56.802 "strip_size_kb": 0, 00:25:56.802 "state": "online", 00:25:56.802 "raid_level": "raid1", 00:25:56.802 "superblock": true, 00:25:56.802 "num_base_bdevs": 2, 00:25:56.802 "num_base_bdevs_discovered": 2, 00:25:56.802 "num_base_bdevs_operational": 2, 00:25:56.802 "process": { 00:25:56.802 "type": "rebuild", 00:25:56.802 "target": "spare", 00:25:56.802 "progress": { 00:25:56.802 "blocks": 2560, 00:25:56.802 "percent": 32 00:25:56.802 } 00:25:56.802 }, 00:25:56.802 "base_bdevs_list": [ 00:25:56.802 { 00:25:56.802 "name": "spare", 00:25:56.802 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:25:56.802 "is_configured": true, 00:25:56.802 "data_offset": 256, 00:25:56.802 "data_size": 7936 00:25:56.802 }, 00:25:56.802 { 00:25:56.802 "name": "BaseBdev2", 00:25:56.802 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:56.802 "is_configured": true, 00:25:56.802 "data_offset": 256, 00:25:56.802 "data_size": 7936 00:25:56.802 } 00:25:56.802 ] 00:25:56.802 }' 00:25:56.802 13:20:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:56.802 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=815 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:56.802 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:56.803 13:20:03 
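The `line 666: [: =: unary operator expected` error recorded just above is a classic single-bracket pitfall: `'[' = false ']'` shows the left-hand variable expanded to nothing, so `[` saw `= false` with no left operand. A minimal reproduction and the two usual fixes (quoting, or `[[ ]]`), independent of the SPDK script itself:

```shell
#!/usr/bin/env bash
# Reproduce and fix the "[: =: unary operator expected" seen in the log.
flag=""   # empty/unset, as the test variable evidently was at line 666

# [ $flag = false ]   # unquoted: expands to '[ = false ]' -> the error above

[ "$flag" = false ] && r1="false" || r1="not-false"   # quoted: safe
[[ $flag = false ]] && r2="false" || r2="not-false"   # [[ ]]: no word splitting
echo "single-bracket=$r1 double-bracket=$r2"
```

Note the log shows the test continued past the error because the failing `[` simply returned nonzero, letting the script fall through to the `num_base_bdevs_operational=2` branch.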
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:56.803 "name": "raid_bdev1", 00:25:56.803 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:56.803 "strip_size_kb": 0, 00:25:56.803 "state": "online", 00:25:56.803 "raid_level": "raid1", 00:25:56.803 "superblock": true, 00:25:56.803 "num_base_bdevs": 2, 00:25:56.803 "num_base_bdevs_discovered": 2, 00:25:56.803 "num_base_bdevs_operational": 2, 00:25:56.803 "process": { 00:25:56.803 "type": "rebuild", 00:25:56.803 "target": "spare", 00:25:56.803 "progress": { 00:25:56.803 "blocks": 2816, 00:25:56.803 "percent": 35 00:25:56.803 } 00:25:56.803 }, 00:25:56.803 "base_bdevs_list": [ 00:25:56.803 { 00:25:56.803 "name": "spare", 00:25:56.803 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:25:56.803 "is_configured": true, 00:25:56.803 "data_offset": 256, 00:25:56.803 "data_size": 7936 00:25:56.803 }, 00:25:56.803 { 00:25:56.803 "name": "BaseBdev2", 00:25:56.803 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:56.803 "is_configured": true, 00:25:56.803 "data_offset": 256, 00:25:56.803 "data_size": 7936 00:25:56.803 } 00:25:56.803 ] 00:25:56.803 }' 00:25:56.803 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:57.060 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:57.060 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:57.060 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.060 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.992 13:20:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:57.992 "name": "raid_bdev1", 00:25:57.992 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:57.992 "strip_size_kb": 0, 00:25:57.992 "state": "online", 00:25:57.992 "raid_level": "raid1", 00:25:57.992 "superblock": true, 00:25:57.992 "num_base_bdevs": 2, 00:25:57.992 "num_base_bdevs_discovered": 2, 00:25:57.992 "num_base_bdevs_operational": 2, 00:25:57.992 "process": { 00:25:57.992 "type": "rebuild", 00:25:57.992 "target": "spare", 00:25:57.992 "progress": { 00:25:57.992 "blocks": 5632, 00:25:57.992 "percent": 70 00:25:57.992 } 00:25:57.992 }, 00:25:57.992 "base_bdevs_list": [ 00:25:57.992 { 00:25:57.992 "name": "spare", 00:25:57.992 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:25:57.992 "is_configured": true, 00:25:57.992 "data_offset": 256, 00:25:57.992 "data_size": 7936 00:25:57.992 }, 00:25:57.992 { 00:25:57.992 "name": "BaseBdev2", 00:25:57.992 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:57.992 "is_configured": true, 00:25:57.992 "data_offset": 256, 00:25:57.992 "data_size": 7936 00:25:57.992 } 00:25:57.992 ] 00:25:57.992 }' 00:25:57.992 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:58.249 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:58.249 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:58.249 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.249 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:58.814 [2024-12-06 13:20:05.263045] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:58.814 [2024-12-06 13:20:05.263138] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:58.814 [2024-12-06 13:20:05.263289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.071 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:59.329 "name": "raid_bdev1", 00:25:59.329 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:59.329 "strip_size_kb": 0, 00:25:59.329 "state": "online", 00:25:59.329 "raid_level": "raid1", 00:25:59.329 "superblock": true, 00:25:59.329 "num_base_bdevs": 2, 00:25:59.329 
"num_base_bdevs_discovered": 2, 00:25:59.329 "num_base_bdevs_operational": 2, 00:25:59.329 "base_bdevs_list": [ 00:25:59.329 { 00:25:59.329 "name": "spare", 00:25:59.329 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:25:59.329 "is_configured": true, 00:25:59.329 "data_offset": 256, 00:25:59.329 "data_size": 7936 00:25:59.329 }, 00:25:59.329 { 00:25:59.329 "name": "BaseBdev2", 00:25:59.329 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:59.329 "is_configured": true, 00:25:59.329 "data_offset": 256, 00:25:59.329 "data_size": 7936 00:25:59.329 } 00:25:59.329 ] 00:25:59.329 }' 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.329 13:20:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:59.329 "name": "raid_bdev1", 00:25:59.329 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:59.329 "strip_size_kb": 0, 00:25:59.329 "state": "online", 00:25:59.329 "raid_level": "raid1", 00:25:59.329 "superblock": true, 00:25:59.329 "num_base_bdevs": 2, 00:25:59.329 "num_base_bdevs_discovered": 2, 00:25:59.329 "num_base_bdevs_operational": 2, 00:25:59.329 "base_bdevs_list": [ 00:25:59.329 { 00:25:59.329 "name": "spare", 00:25:59.329 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:25:59.329 "is_configured": true, 00:25:59.329 "data_offset": 256, 00:25:59.329 "data_size": 7936 00:25:59.329 }, 00:25:59.329 { 00:25:59.329 "name": "BaseBdev2", 00:25:59.329 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:59.329 "is_configured": true, 00:25:59.329 "data_offset": 256, 00:25:59.329 "data_size": 7936 00:25:59.329 } 00:25:59.329 ] 00:25:59.329 }' 00:25:59.329 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:59.587 13:20:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:59.587 "name": 
"raid_bdev1", 00:25:59.587 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:25:59.587 "strip_size_kb": 0, 00:25:59.587 "state": "online", 00:25:59.587 "raid_level": "raid1", 00:25:59.587 "superblock": true, 00:25:59.587 "num_base_bdevs": 2, 00:25:59.587 "num_base_bdevs_discovered": 2, 00:25:59.587 "num_base_bdevs_operational": 2, 00:25:59.587 "base_bdevs_list": [ 00:25:59.587 { 00:25:59.587 "name": "spare", 00:25:59.587 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:25:59.587 "is_configured": true, 00:25:59.587 "data_offset": 256, 00:25:59.587 "data_size": 7936 00:25:59.587 }, 00:25:59.587 { 00:25:59.587 "name": "BaseBdev2", 00:25:59.587 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:25:59.587 "is_configured": true, 00:25:59.587 "data_offset": 256, 00:25:59.587 "data_size": 7936 00:25:59.587 } 00:25:59.587 ] 00:25:59.587 }' 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:59.587 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.153 [2024-12-06 13:20:06.418840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:00.153 [2024-12-06 13:20:06.418886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:00.153 [2024-12-06 13:20:06.418996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:00.153 [2024-12-06 13:20:06.419090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:00.153 [2024-12-06 
13:20:06.419107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.153 13:20:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.153 [2024-12-06 13:20:06.486811] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:00.153 [2024-12-06 13:20:06.486876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.153 [2024-12-06 13:20:06.486909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:00.153 [2024-12-06 13:20:06.486924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.153 [2024-12-06 13:20:06.489571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.153 [2024-12-06 13:20:06.489616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:00.153 [2024-12-06 13:20:06.489704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:00.153 [2024-12-06 13:20:06.489766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:00.153 [2024-12-06 13:20:06.489913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:00.153 spare 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.153 [2024-12-06 13:20:06.590029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:00.153 [2024-12-06 13:20:06.590065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:26:00.153 [2024-12-06 13:20:06.590182] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:00.153 [2024-12-06 13:20:06.590291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:00.153 [2024-12-06 13:20:06.590309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:26:00.153 [2024-12-06 13:20:06.590441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.153 13:20:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.153 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.154 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.154 "name": "raid_bdev1", 00:26:00.154 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:00.154 "strip_size_kb": 0, 00:26:00.154 "state": "online", 00:26:00.154 "raid_level": "raid1", 00:26:00.154 "superblock": true, 00:26:00.154 "num_base_bdevs": 2, 00:26:00.154 "num_base_bdevs_discovered": 2, 00:26:00.154 "num_base_bdevs_operational": 2, 00:26:00.154 "base_bdevs_list": [ 00:26:00.154 { 00:26:00.154 "name": "spare", 00:26:00.154 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:26:00.154 "is_configured": true, 00:26:00.154 "data_offset": 256, 00:26:00.154 "data_size": 7936 00:26:00.154 }, 00:26:00.154 { 00:26:00.154 "name": "BaseBdev2", 00:26:00.154 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:00.154 "is_configured": true, 00:26:00.154 "data_offset": 256, 00:26:00.154 "data_size": 7936 00:26:00.154 } 00:26:00.154 ] 00:26:00.154 }' 00:26:00.154 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.154 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:00.787 13:20:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:00.787 "name": "raid_bdev1", 00:26:00.787 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:00.787 "strip_size_kb": 0, 00:26:00.787 "state": "online", 00:26:00.787 "raid_level": "raid1", 00:26:00.787 "superblock": true, 00:26:00.787 "num_base_bdevs": 2, 00:26:00.787 "num_base_bdevs_discovered": 2, 00:26:00.787 "num_base_bdevs_operational": 2, 00:26:00.787 "base_bdevs_list": [ 00:26:00.787 { 00:26:00.787 "name": "spare", 00:26:00.787 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:26:00.787 "is_configured": true, 00:26:00.787 "data_offset": 256, 00:26:00.787 "data_size": 7936 00:26:00.787 }, 00:26:00.787 { 00:26:00.787 "name": "BaseBdev2", 00:26:00.787 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:00.787 "is_configured": true, 00:26:00.787 "data_offset": 256, 00:26:00.787 "data_size": 7936 00:26:00.787 } 00:26:00.787 ] 00:26:00.787 }' 00:26:00.787 13:20:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:00.787 [2024-12-06 13:20:07.303146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:00.787 13:20:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.787 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:00.788 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.788 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.788 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.788 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.788 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:01.046 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.046 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:01.046 "name": "raid_bdev1", 00:26:01.046 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:01.046 "strip_size_kb": 0, 00:26:01.046 "state": "online", 00:26:01.046 
"raid_level": "raid1", 00:26:01.046 "superblock": true, 00:26:01.046 "num_base_bdevs": 2, 00:26:01.046 "num_base_bdevs_discovered": 1, 00:26:01.046 "num_base_bdevs_operational": 1, 00:26:01.046 "base_bdevs_list": [ 00:26:01.046 { 00:26:01.046 "name": null, 00:26:01.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.046 "is_configured": false, 00:26:01.046 "data_offset": 0, 00:26:01.046 "data_size": 7936 00:26:01.046 }, 00:26:01.046 { 00:26:01.046 "name": "BaseBdev2", 00:26:01.046 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:01.046 "is_configured": true, 00:26:01.046 "data_offset": 256, 00:26:01.046 "data_size": 7936 00:26:01.046 } 00:26:01.046 ] 00:26:01.046 }' 00:26:01.046 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:01.046 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:01.304 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:01.304 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.304 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:01.304 [2024-12-06 13:20:07.791302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:01.304 [2024-12-06 13:20:07.791579] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:01.304 [2024-12-06 13:20:07.791618] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:01.304 [2024-12-06 13:20:07.791676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:01.304 [2024-12-06 13:20:07.807330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:01.304 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.304 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:26:01.305 [2024-12-06 13:20:07.809956] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:26:02.679 "name": "raid_bdev1", 00:26:02.679 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:02.679 "strip_size_kb": 0, 00:26:02.679 "state": "online", 00:26:02.679 "raid_level": "raid1", 00:26:02.679 "superblock": true, 00:26:02.679 "num_base_bdevs": 2, 00:26:02.679 "num_base_bdevs_discovered": 2, 00:26:02.679 "num_base_bdevs_operational": 2, 00:26:02.679 "process": { 00:26:02.679 "type": "rebuild", 00:26:02.679 "target": "spare", 00:26:02.679 "progress": { 00:26:02.679 "blocks": 2560, 00:26:02.679 "percent": 32 00:26:02.679 } 00:26:02.679 }, 00:26:02.679 "base_bdevs_list": [ 00:26:02.679 { 00:26:02.679 "name": "spare", 00:26:02.679 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:26:02.679 "is_configured": true, 00:26:02.679 "data_offset": 256, 00:26:02.679 "data_size": 7936 00:26:02.679 }, 00:26:02.679 { 00:26:02.679 "name": "BaseBdev2", 00:26:02.679 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:02.679 "is_configured": true, 00:26:02.679 "data_offset": 256, 00:26:02.679 "data_size": 7936 00:26:02.679 } 00:26:02.679 ] 00:26:02.679 }' 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.679 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.679 [2024-12-06 13:20:08.975279] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:02.679 [2024-12-06 13:20:09.018278] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:02.679 [2024-12-06 13:20:09.018382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.679 [2024-12-06 13:20:09.018408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:02.679 [2024-12-06 13:20:09.018423] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.679 13:20:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.679 "name": "raid_bdev1", 00:26:02.679 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:02.679 "strip_size_kb": 0, 00:26:02.679 "state": "online", 00:26:02.679 "raid_level": "raid1", 00:26:02.679 "superblock": true, 00:26:02.679 "num_base_bdevs": 2, 00:26:02.679 "num_base_bdevs_discovered": 1, 00:26:02.679 "num_base_bdevs_operational": 1, 00:26:02.679 "base_bdevs_list": [ 00:26:02.679 { 00:26:02.679 "name": null, 00:26:02.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.679 "is_configured": false, 00:26:02.679 "data_offset": 0, 00:26:02.679 "data_size": 7936 00:26:02.679 }, 00:26:02.679 { 00:26:02.679 "name": "BaseBdev2", 00:26:02.679 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:02.679 "is_configured": true, 00:26:02.679 "data_offset": 256, 00:26:02.679 "data_size": 7936 00:26:02.679 } 00:26:02.679 ] 00:26:02.679 }' 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.679 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:03.245 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:03.245 13:20:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.245 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:03.245 [2024-12-06 13:20:09.554342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:03.245 [2024-12-06 13:20:09.554431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.245 [2024-12-06 13:20:09.554486] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:03.245 [2024-12-06 13:20:09.554508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.245 [2024-12-06 13:20:09.554764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.245 [2024-12-06 13:20:09.554794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:03.245 [2024-12-06 13:20:09.554872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:03.245 [2024-12-06 13:20:09.554907] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:03.245 [2024-12-06 13:20:09.554929] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:03.245 [2024-12-06 13:20:09.554962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:03.245 [2024-12-06 13:20:09.570260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:03.245 spare 00:26:03.245 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.245 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:26:03.245 [2024-12-06 13:20:09.572732] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:26:04.191 "name": "raid_bdev1", 00:26:04.191 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:04.191 "strip_size_kb": 0, 00:26:04.191 "state": "online", 00:26:04.191 "raid_level": "raid1", 00:26:04.191 "superblock": true, 00:26:04.191 "num_base_bdevs": 2, 00:26:04.191 "num_base_bdevs_discovered": 2, 00:26:04.191 "num_base_bdevs_operational": 2, 00:26:04.191 "process": { 00:26:04.191 "type": "rebuild", 00:26:04.191 "target": "spare", 00:26:04.191 "progress": { 00:26:04.191 "blocks": 2560, 00:26:04.191 "percent": 32 00:26:04.191 } 00:26:04.191 }, 00:26:04.191 "base_bdevs_list": [ 00:26:04.191 { 00:26:04.191 "name": "spare", 00:26:04.191 "uuid": "79c76df4-1a60-5ef4-b0f7-f0e3f42a06b2", 00:26:04.191 "is_configured": true, 00:26:04.191 "data_offset": 256, 00:26:04.191 "data_size": 7936 00:26:04.191 }, 00:26:04.191 { 00:26:04.191 "name": "BaseBdev2", 00:26:04.191 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:04.191 "is_configured": true, 00:26:04.191 "data_offset": 256, 00:26:04.191 "data_size": 7936 00:26:04.191 } 00:26:04.191 ] 00:26:04.191 }' 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:04.191 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.449 [2024-12-06 
13:20:10.745951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:04.449 [2024-12-06 13:20:10.780929] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:04.449 [2024-12-06 13:20:10.781003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.449 [2024-12-06 13:20:10.781031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:04.449 [2024-12-06 13:20:10.781043] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.449 13:20:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.449 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:04.450 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.450 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.450 "name": "raid_bdev1", 00:26:04.450 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:04.450 "strip_size_kb": 0, 00:26:04.450 "state": "online", 00:26:04.450 "raid_level": "raid1", 00:26:04.450 "superblock": true, 00:26:04.450 "num_base_bdevs": 2, 00:26:04.450 "num_base_bdevs_discovered": 1, 00:26:04.450 "num_base_bdevs_operational": 1, 00:26:04.450 "base_bdevs_list": [ 00:26:04.450 { 00:26:04.450 "name": null, 00:26:04.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.450 "is_configured": false, 00:26:04.450 "data_offset": 0, 00:26:04.450 "data_size": 7936 00:26:04.450 }, 00:26:04.450 { 00:26:04.450 "name": "BaseBdev2", 00:26:04.450 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:04.450 "is_configured": true, 00:26:04.450 "data_offset": 256, 00:26:04.450 "data_size": 7936 00:26:04.450 } 00:26:04.450 ] 00:26:04.450 }' 00:26:04.450 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.450 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.016 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:05.016 13:20:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:05.016 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:05.016 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:05.016 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:05.017 "name": "raid_bdev1", 00:26:05.017 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:05.017 "strip_size_kb": 0, 00:26:05.017 "state": "online", 00:26:05.017 "raid_level": "raid1", 00:26:05.017 "superblock": true, 00:26:05.017 "num_base_bdevs": 2, 00:26:05.017 "num_base_bdevs_discovered": 1, 00:26:05.017 "num_base_bdevs_operational": 1, 00:26:05.017 "base_bdevs_list": [ 00:26:05.017 { 00:26:05.017 "name": null, 00:26:05.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.017 "is_configured": false, 00:26:05.017 "data_offset": 0, 00:26:05.017 "data_size": 7936 00:26:05.017 }, 00:26:05.017 { 00:26:05.017 "name": "BaseBdev2", 00:26:05.017 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:05.017 "is_configured": true, 00:26:05.017 "data_offset": 256, 
00:26:05.017 "data_size": 7936 00:26:05.017 } 00:26:05.017 ] 00:26:05.017 }' 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:05.017 [2024-12-06 13:20:11.480354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:05.017 [2024-12-06 13:20:11.480423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.017 [2024-12-06 13:20:11.480470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:05.017 [2024-12-06 13:20:11.480488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.017 [2024-12-06 13:20:11.480710] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.017 [2024-12-06 13:20:11.480744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:05.017 [2024-12-06 13:20:11.480818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:05.017 [2024-12-06 13:20:11.480839] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:05.017 [2024-12-06 13:20:11.480854] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:05.017 [2024-12-06 13:20:11.480866] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:26:05.017 BaseBdev1 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.017 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.392 13:20:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.392 "name": "raid_bdev1", 00:26:06.392 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:06.392 "strip_size_kb": 0, 00:26:06.392 "state": "online", 00:26:06.392 "raid_level": "raid1", 00:26:06.392 "superblock": true, 00:26:06.392 "num_base_bdevs": 2, 00:26:06.392 "num_base_bdevs_discovered": 1, 00:26:06.392 "num_base_bdevs_operational": 1, 00:26:06.392 "base_bdevs_list": [ 00:26:06.392 { 00:26:06.392 "name": null, 00:26:06.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.392 "is_configured": false, 00:26:06.392 "data_offset": 0, 00:26:06.392 "data_size": 7936 00:26:06.392 }, 00:26:06.392 { 00:26:06.392 "name": "BaseBdev2", 00:26:06.392 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:06.392 "is_configured": true, 00:26:06.392 "data_offset": 256, 00:26:06.392 "data_size": 7936 00:26:06.392 } 00:26:06.392 ] 00:26:06.392 }' 00:26:06.392 13:20:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.392 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:06.656 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:06.656 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:06.656 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:06.656 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:06.656 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:06.656 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:06.656 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.657 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.657 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:06.657 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.657 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:06.657 "name": "raid_bdev1", 00:26:06.657 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:06.657 "strip_size_kb": 0, 00:26:06.657 "state": "online", 00:26:06.657 "raid_level": "raid1", 00:26:06.657 "superblock": true, 00:26:06.657 "num_base_bdevs": 2, 00:26:06.657 "num_base_bdevs_discovered": 1, 00:26:06.657 "num_base_bdevs_operational": 1, 00:26:06.657 "base_bdevs_list": [ 00:26:06.657 { 00:26:06.657 "name": 
null, 00:26:06.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.657 "is_configured": false, 00:26:06.657 "data_offset": 0, 00:26:06.657 "data_size": 7936 00:26:06.657 }, 00:26:06.657 { 00:26:06.657 "name": "BaseBdev2", 00:26:06.657 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:06.657 "is_configured": true, 00:26:06.657 "data_offset": 256, 00:26:06.657 "data_size": 7936 00:26:06.657 } 00:26:06.657 ] 00:26:06.657 }' 00:26:06.657 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:06.657 [2024-12-06 13:20:13.092898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:06.657 [2024-12-06 13:20:13.093120] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:06.657 [2024-12-06 13:20:13.093149] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:06.657 request: 00:26:06.657 { 00:26:06.657 "base_bdev": "BaseBdev1", 00:26:06.657 "raid_bdev": "raid_bdev1", 00:26:06.657 "method": "bdev_raid_add_base_bdev", 00:26:06.657 "req_id": 1 00:26:06.657 } 00:26:06.657 Got JSON-RPC error response 00:26:06.657 response: 00:26:06.657 { 00:26:06.657 "code": -22, 00:26:06.657 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:06.657 } 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:06.657 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:26:07.592 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:26:07.592 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:07.592 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:07.592 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:07.592 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:07.592 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:07.592 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.592 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.593 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.593 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.593 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.593 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.593 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.593 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:07.850 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.850 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.850 "name": "raid_bdev1", 00:26:07.850 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:07.850 "strip_size_kb": 0, 
00:26:07.850 "state": "online", 00:26:07.850 "raid_level": "raid1", 00:26:07.850 "superblock": true, 00:26:07.850 "num_base_bdevs": 2, 00:26:07.850 "num_base_bdevs_discovered": 1, 00:26:07.850 "num_base_bdevs_operational": 1, 00:26:07.850 "base_bdevs_list": [ 00:26:07.850 { 00:26:07.850 "name": null, 00:26:07.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.850 "is_configured": false, 00:26:07.851 "data_offset": 0, 00:26:07.851 "data_size": 7936 00:26:07.851 }, 00:26:07.851 { 00:26:07.851 "name": "BaseBdev2", 00:26:07.851 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:07.851 "is_configured": true, 00:26:07.851 "data_offset": 256, 00:26:07.851 "data_size": 7936 00:26:07.851 } 00:26:07.851 ] 00:26:07.851 }' 00:26:07.851 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.851 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:08.108 13:20:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.108 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:08.367 "name": "raid_bdev1", 00:26:08.367 "uuid": "248745ba-8bf3-488a-88b9-3ab4f8c894aa", 00:26:08.367 "strip_size_kb": 0, 00:26:08.367 "state": "online", 00:26:08.367 "raid_level": "raid1", 00:26:08.367 "superblock": true, 00:26:08.367 "num_base_bdevs": 2, 00:26:08.367 "num_base_bdevs_discovered": 1, 00:26:08.367 "num_base_bdevs_operational": 1, 00:26:08.367 "base_bdevs_list": [ 00:26:08.367 { 00:26:08.367 "name": null, 00:26:08.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.367 "is_configured": false, 00:26:08.367 "data_offset": 0, 00:26:08.367 "data_size": 7936 00:26:08.367 }, 00:26:08.367 { 00:26:08.367 "name": "BaseBdev2", 00:26:08.367 "uuid": "379e6c6b-6058-56b7-a7df-a79856bcaba4", 00:26:08.367 "is_configured": true, 00:26:08.367 "data_offset": 256, 00:26:08.367 "data_size": 7936 00:26:08.367 } 00:26:08.367 ] 00:26:08.367 }' 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89835 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89835 ']' 00:26:08.367 13:20:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89835 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89835 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:08.367 killing process with pid 89835 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89835' 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89835 00:26:08.367 Received shutdown signal, test time was about 60.000000 seconds 00:26:08.367 00:26:08.367 Latency(us) 00:26:08.367 [2024-12-06T13:20:14.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.367 [2024-12-06T13:20:14.896Z] =================================================================================================================== 00:26:08.367 [2024-12-06T13:20:14.896Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:08.367 [2024-12-06 13:20:14.784107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:08.367 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89835 00:26:08.367 [2024-12-06 13:20:14.784268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:08.367 [2024-12-06 13:20:14.784335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:26:08.367 [2024-12-06 13:20:14.784362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:26:08.625 [2024-12-06 13:20:15.051209] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.558 ************************************ 00:26:09.558 END TEST raid_rebuild_test_sb_md_interleaved 00:26:09.558 ************************************ 00:26:09.558 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:26:09.558 00:26:09.558 real 0m18.158s 00:26:09.558 user 0m24.630s 00:26:09.558 sys 0m1.371s 00:26:09.558 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.558 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:26:09.816 13:20:16 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:26:09.816 13:20:16 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:26:09.816 13:20:16 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89835 ']' 00:26:09.816 13:20:16 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89835 00:26:09.816 13:20:16 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:26:09.816 00:26:09.816 real 13m17.520s 00:26:09.816 user 18m36.882s 00:26:09.816 sys 1m52.763s 00:26:09.816 13:20:16 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.816 ************************************ 00:26:09.816 END TEST bdev_raid 00:26:09.816 ************************************ 00:26:09.816 13:20:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:09.816 13:20:16 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:09.816 13:20:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:09.816 13:20:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.816 13:20:16 -- common/autotest_common.sh@10 -- # set +x 00:26:09.816 
************************************ 00:26:09.816 START TEST spdkcli_raid 00:26:09.816 ************************************ 00:26:09.816 13:20:16 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:09.816 * Looking for test storage... 00:26:09.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:09.816 13:20:16 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:09.816 13:20:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:26:09.816 13:20:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:09.816 13:20:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:26:09.816 13:20:16 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.074 13:20:16 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:26:10.074 13:20:16 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.074 13:20:16 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:10.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.074 --rc genhtml_branch_coverage=1 00:26:10.074 --rc genhtml_function_coverage=1 00:26:10.074 --rc genhtml_legend=1 00:26:10.074 --rc geninfo_all_blocks=1 00:26:10.074 --rc geninfo_unexecuted_blocks=1 00:26:10.074 00:26:10.074 ' 00:26:10.074 13:20:16 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:10.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.074 --rc genhtml_branch_coverage=1 00:26:10.074 --rc genhtml_function_coverage=1 00:26:10.074 --rc genhtml_legend=1 00:26:10.074 --rc geninfo_all_blocks=1 00:26:10.074 --rc geninfo_unexecuted_blocks=1 00:26:10.074 00:26:10.075 ' 00:26:10.075 
13:20:16 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:10.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.075 --rc genhtml_branch_coverage=1 00:26:10.075 --rc genhtml_function_coverage=1 00:26:10.075 --rc genhtml_legend=1 00:26:10.075 --rc geninfo_all_blocks=1 00:26:10.075 --rc geninfo_unexecuted_blocks=1 00:26:10.075 00:26:10.075 ' 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:10.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.075 --rc genhtml_branch_coverage=1 00:26:10.075 --rc genhtml_function_coverage=1 00:26:10.075 --rc genhtml_legend=1 00:26:10.075 --rc geninfo_all_blocks=1 00:26:10.075 --rc geninfo_unexecuted_blocks=1 00:26:10.075 00:26:10.075 ' 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:26:10.075 13:20:16 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:10.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90516 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:26:10.075 13:20:16 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90516 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90516 ']' 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.075 13:20:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:10.075 [2024-12-06 13:20:16.500271] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:26:10.075 [2024-12-06 13:20:16.500441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90516 ] 00:26:10.333 [2024-12-06 13:20:16.689969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:10.333 [2024-12-06 13:20:16.845788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.333 [2024-12-06 13:20:16.845798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.269 13:20:17 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.269 13:20:17 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:26:11.269 13:20:17 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:26:11.269 13:20:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:11.269 13:20:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:11.269 13:20:17 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:26:11.269 13:20:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.269 13:20:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:11.269 13:20:17 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:11.269 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:11.269 ' 00:26:13.172 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:26:13.172 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:26:13.172 13:20:19 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:26:13.172 13:20:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.172 13:20:19 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.172 13:20:19 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:26:13.172 13:20:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.172 13:20:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:13.172 13:20:19 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:26:13.172 ' 00:26:14.104 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:26:14.362 13:20:20 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:26:14.362 13:20:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.362 13:20:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:14.362 13:20:20 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:26:14.362 13:20:20 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.362 13:20:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:14.362 13:20:20 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:26:14.362 13:20:20 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:26:14.929 13:20:21 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:26:14.930 13:20:21 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:26:14.930 13:20:21 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:26:14.930 13:20:21 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.930 13:20:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:14.930 13:20:21 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:26:14.930 13:20:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.930 13:20:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:14.930 13:20:21 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:26:14.930 ' 00:26:15.863 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:26:16.120 13:20:22 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:26:16.120 13:20:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.120 13:20:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:16.120 13:20:22 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:26:16.120 13:20:22 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.120 13:20:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:16.120 13:20:22 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:26:16.120 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:26:16.120 ' 00:26:17.496 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:26:17.496 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:26:17.496 13:20:24 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:17.754 13:20:24 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90516 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90516 ']' 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90516 00:26:17.754 13:20:24 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90516 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90516' 00:26:17.754 killing process with pid 90516 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90516 00:26:17.754 13:20:24 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90516 00:26:20.337 13:20:26 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:26:20.337 13:20:26 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90516 ']' 00:26:20.337 13:20:26 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90516 00:26:20.337 13:20:26 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90516 ']' 00:26:20.337 13:20:26 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90516 00:26:20.337 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90516) - No such process 00:26:20.337 Process with pid 90516 is not found 00:26:20.337 13:20:26 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90516 is not found' 00:26:20.337 13:20:26 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:26:20.337 13:20:26 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:20.337 13:20:26 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:20.337 13:20:26 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:20.337 ************************************ 00:26:20.337 END TEST spdkcli_raid 
00:26:20.337 ************************************ 00:26:20.337 00:26:20.337 real 0m10.157s 00:26:20.337 user 0m21.015s 00:26:20.337 sys 0m1.119s 00:26:20.337 13:20:26 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.337 13:20:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:26:20.337 13:20:26 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:20.337 13:20:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:20.337 13:20:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.337 13:20:26 -- common/autotest_common.sh@10 -- # set +x 00:26:20.337 ************************************ 00:26:20.337 START TEST blockdev_raid5f 00:26:20.337 ************************************ 00:26:20.337 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:26:20.337 * Looking for test storage... 00:26:20.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:20.337 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:20.337 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:26:20.337 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:20.337 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:20.337 13:20:26 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.337 13:20:26 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.337 13:20:26 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.337 13:20:26 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.337 13:20:26 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.338 13:20:26 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:20.338 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.338 --rc genhtml_branch_coverage=1 00:26:20.338 --rc genhtml_function_coverage=1 00:26:20.338 --rc genhtml_legend=1 00:26:20.338 --rc geninfo_all_blocks=1 00:26:20.338 --rc geninfo_unexecuted_blocks=1 00:26:20.338 00:26:20.338 ' 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:20.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.338 --rc genhtml_branch_coverage=1 00:26:20.338 --rc genhtml_function_coverage=1 00:26:20.338 --rc genhtml_legend=1 00:26:20.338 --rc geninfo_all_blocks=1 00:26:20.338 --rc geninfo_unexecuted_blocks=1 00:26:20.338 00:26:20.338 ' 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:20.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.338 --rc genhtml_branch_coverage=1 00:26:20.338 --rc genhtml_function_coverage=1 00:26:20.338 --rc genhtml_legend=1 00:26:20.338 --rc geninfo_all_blocks=1 00:26:20.338 --rc geninfo_unexecuted_blocks=1 00:26:20.338 00:26:20.338 ' 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:20.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.338 --rc genhtml_branch_coverage=1 00:26:20.338 --rc genhtml_function_coverage=1 00:26:20.338 --rc genhtml_legend=1 00:26:20.338 --rc geninfo_all_blocks=1 00:26:20.338 --rc geninfo_unexecuted_blocks=1 00:26:20.338 00:26:20.338 ' 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90792 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90792 00:26:20.338 13:20:26 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90792 ']' 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.338 13:20:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:20.338 [2024-12-06 13:20:26.739623] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:20.338 [2024-12-06 13:20:26.739954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90792 ] 00:26:20.596 [2024-12-06 13:20:26.921706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.596 [2024-12-06 13:20:27.081882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.531 13:20:27 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.531 13:20:27 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:26:21.531 13:20:27 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:26:21.531 13:20:27 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:26:21.531 13:20:27 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:26:21.531 13:20:27 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.531 13:20:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:21.531 Malloc0 00:26:21.531 Malloc1 00:26:21.791 Malloc2 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6412413a-8c41-4488-96f0-9fdefcfb1eda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6412413a-8c41-4488-96f0-9fdefcfb1eda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6412413a-8c41-4488-96f0-9fdefcfb1eda",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "8cef0c30-2595-4cb1-ab29-9f108567df68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ed22d6de-a40a-47a7-9c95-a3fc294b0655",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "624e8c71-c2e7-42c6-9759-f85328f0164d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:26:21.791 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90792 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90792 ']' 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90792 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90792 00:26:21.791 killing process with pid 90792 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90792' 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90792 00:26:21.791 13:20:28 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90792 00:26:24.327 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:24.327 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:24.327 13:20:30 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:24.327 13:20:30 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.327 13:20:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:24.327 ************************************ 00:26:24.327 START TEST bdev_hello_world 00:26:24.327 ************************************ 00:26:24.327 13:20:30 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:26:24.586 [2024-12-06 13:20:30.946990] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:24.586 [2024-12-06 13:20:30.947395] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90864 ] 00:26:24.844 [2024-12-06 13:20:31.144405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.844 [2024-12-06 13:20:31.306711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.412 [2024-12-06 13:20:31.902794] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:25.412 [2024-12-06 13:20:31.902858] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:26:25.412 [2024-12-06 13:20:31.902884] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:25.412 [2024-12-06 13:20:31.903476] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:25.412 [2024-12-06 13:20:31.903659] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:25.412 [2024-12-06 13:20:31.903685] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:25.412 [2024-12-06 13:20:31.903754] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:26:25.412 00:26:25.412 [2024-12-06 13:20:31.903781] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:26.788 00:26:26.788 real 0m2.408s 00:26:26.788 user 0m1.943s 00:26:26.788 sys 0m0.340s 00:26:26.788 13:20:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.788 13:20:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:26.788 ************************************ 00:26:26.788 END TEST bdev_hello_world 00:26:26.788 ************************************ 00:26:26.788 13:20:33 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:26:26.788 13:20:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:26.788 13:20:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.788 13:20:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:26.788 ************************************ 00:26:26.788 START TEST bdev_bounds 00:26:26.788 ************************************ 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:26:26.788 Process bdevio pid: 90902 00:26:26.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90902 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90902' 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90902 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90902 ']' 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.788 13:20:33 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:27.047 [2024-12-06 13:20:33.398756] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:26:27.047 [2024-12-06 13:20:33.399106] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90902 ] 00:26:27.305 [2024-12-06 13:20:33.584025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:27.305 [2024-12-06 13:20:33.720499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.305 [2024-12-06 13:20:33.720627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.305 [2024-12-06 13:20:33.720637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.241 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.241 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:26:28.241 13:20:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:28.241 I/O targets: 00:26:28.241 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:26:28.241 00:26:28.241 00:26:28.241 CUnit - A unit testing framework for C - Version 2.1-3 00:26:28.241 http://cunit.sourceforge.net/ 00:26:28.241 00:26:28.241 00:26:28.241 Suite: bdevio tests on: raid5f 00:26:28.241 Test: blockdev write read block ...passed 00:26:28.241 Test: blockdev write zeroes read block ...passed 00:26:28.241 Test: blockdev write zeroes read no split ...passed 00:26:28.241 Test: blockdev write zeroes read split ...passed 00:26:28.501 Test: blockdev write zeroes read split partial ...passed 00:26:28.501 Test: blockdev reset ...passed 00:26:28.501 Test: blockdev write read 8 blocks ...passed 00:26:28.501 Test: blockdev write read size > 128k ...passed 00:26:28.501 Test: blockdev write read invalid size ...passed 00:26:28.501 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:26:28.501 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:28.501 Test: blockdev write read max offset ...passed 00:26:28.501 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:28.501 Test: blockdev writev readv 8 blocks ...passed 00:26:28.501 Test: blockdev writev readv 30 x 1block ...passed 00:26:28.501 Test: blockdev writev readv block ...passed 00:26:28.501 Test: blockdev writev readv size > 128k ...passed 00:26:28.501 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:28.501 Test: blockdev comparev and writev ...passed 00:26:28.501 Test: blockdev nvme passthru rw ...passed 00:26:28.501 Test: blockdev nvme passthru vendor specific ...passed 00:26:28.501 Test: blockdev nvme admin passthru ...passed 00:26:28.501 Test: blockdev copy ...passed 00:26:28.501 00:26:28.501 Run Summary: Type Total Ran Passed Failed Inactive 00:26:28.501 suites 1 1 n/a 0 0 00:26:28.501 tests 23 23 23 0 0 00:26:28.501 asserts 130 130 130 0 n/a 00:26:28.501 00:26:28.501 Elapsed time = 0.629 seconds 00:26:28.501 0 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90902 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90902 ']' 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90902 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90902 00:26:28.501 killing process with pid 90902 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:28.501 13:20:34 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90902' 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90902 00:26:28.501 13:20:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90902 00:26:29.874 ************************************ 00:26:29.875 END TEST bdev_bounds 00:26:29.875 ************************************ 00:26:29.875 13:20:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:26:29.875 00:26:29.875 real 0m2.894s 00:26:29.875 user 0m7.245s 00:26:29.875 sys 0m0.424s 00:26:29.875 13:20:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.875 13:20:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:29.875 13:20:36 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:29.875 13:20:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:29.875 13:20:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.875 13:20:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:29.875 ************************************ 00:26:29.875 START TEST bdev_nbd 00:26:29.875 ************************************ 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90967 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90967 /var/tmp/spdk-nbd.sock 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90967 ']' 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:29.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.875 13:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:29.875 [2024-12-06 13:20:36.345561] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:29.875 [2024-12-06 13:20:36.345987] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.133 [2024-12-06 13:20:36.526071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.390 [2024-12-06 13:20:36.678656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx 
/var/tmp/spdk-nbd.sock raid5f 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:30.957 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:31.216 1+0 records in 00:26:31.216 1+0 records out 00:26:31.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445091 s, 9.2 MB/s 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:26:31.216 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:31.475 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:31.475 { 00:26:31.475 "nbd_device": "/dev/nbd0", 00:26:31.475 "bdev_name": "raid5f" 00:26:31.475 } 00:26:31.475 ]' 00:26:31.475 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:31.475 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:31.475 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:31.475 { 00:26:31.475 "nbd_device": "/dev/nbd0", 00:26:31.475 "bdev_name": "raid5f" 00:26:31.475 } 00:26:31.475 ]' 00:26:31.734 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:31.734 
13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:31.734 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:31.734 13:20:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.734 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:31.993 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:31.993 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.993 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:31.993 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:31.993 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:32.254 
13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:32.254 13:20:38 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:32.254 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:26:32.513 /dev/nbd0 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:32.513 1+0 records in 00:26:32.513 1+0 records out 00:26:32.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405415 s, 10.1 MB/s 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:32.513 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:32.771 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:32.771 { 00:26:32.771 "nbd_device": "/dev/nbd0", 00:26:32.771 "bdev_name": "raid5f" 00:26:32.771 } 00:26:32.771 ]' 00:26:32.771 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:32.771 { 00:26:32.771 "nbd_device": "/dev/nbd0", 00:26:32.771 "bdev_name": "raid5f" 00:26:32.771 } 00:26:32.771 ]' 00:26:32.771 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 
00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:26:33.029 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:33.030 256+0 records in 00:26:33.030 256+0 records out 00:26:33.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643574 s, 163 MB/s 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:33.030 256+0 records in 00:26:33.030 256+0 records out 00:26:33.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0396046 s, 26.5 MB/s 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:33.030 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:33.288 13:20:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:33.546 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:33.546 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:33.546 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:26:33.805 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:34.063 malloc_lvol_verify 00:26:34.063 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:34.321 a4f76f87-b329-44fc-9ada-bdb789f00134 00:26:34.321 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:34.580 744af91d-551c-4682-a24a-9e2ce35596ae 00:26:34.580 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:34.839 /dev/nbd0 00:26:34.839 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:26:34.839 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:26:34.839 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:26:34.839 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:26:34.839 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:26:34.839 mke2fs 1.47.0 (5-Feb-2023) 00:26:35.098 Discarding device blocks: 0/4096 done 00:26:35.098 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:35.098 00:26:35.098 Allocating group tables: 0/1 done 00:26:35.098 Writing inode tables: 0/1 done 00:26:35.098 Creating journal (1024 blocks): done 00:26:35.098 Writing superblocks and filesystem accounting information: 0/1 done 00:26:35.098 00:26:35.098 13:20:41 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:35.098 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:35.098 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:35.098 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:35.098 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:35.098 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:35.098 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90967 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90967 ']' 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90967 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90967 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:35.358 killing process with pid 90967 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90967' 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90967 00:26:35.358 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 90967 00:26:36.734 13:20:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:26:36.734 00:26:36.734 real 0m6.851s 00:26:36.734 user 0m10.026s 00:26:36.734 sys 0m1.392s 00:26:36.734 13:20:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:36.734 13:20:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:36.734 ************************************ 00:26:36.734 END TEST bdev_nbd 00:26:36.734 ************************************ 00:26:36.734 13:20:43 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:26:36.734 13:20:43 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:26:36.734 13:20:43 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:26:36.734 13:20:43 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:26:36.734 13:20:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:36.734 13:20:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:36.734 13:20:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:36.734 ************************************ 00:26:36.734 START TEST bdev_fio 00:26:36.734 ************************************ 
00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:26:36.734 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:26:36.734 13:20:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.735 ************************************ 00:26:36.735 START TEST bdev_fio_rw_verify 00:26:36.735 ************************************ 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:26:36.735 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:36.993 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:36.993 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:36.994 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:26:36.994 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:36.994 13:20:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:37.252 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:37.252 fio-3.35 00:26:37.252 Starting 1 thread 00:26:49.461 00:26:49.461 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91177: Fri Dec 6 13:20:54 2024 00:26:49.461 read: IOPS=7372, BW=28.8MiB/s (30.2MB/s)(288MiB/10001msec) 00:26:49.461 slat (usec): min=25, max=144, avg=34.53, stdev= 9.53 00:26:49.461 clat (usec): min=17, max=578, avg=215.82, stdev=85.04 00:26:49.461 lat (usec): min=50, max=642, avg=250.34, stdev=86.50 00:26:49.461 clat percentiles (usec): 00:26:49.461 | 50.000th=[ 215], 99.000th=[ 392], 99.900th=[ 437], 99.990th=[ 519], 00:26:49.461 | 99.999th=[ 578] 
00:26:49.461 write: IOPS=7708, BW=30.1MiB/s (31.6MB/s)(298MiB/9883msec); 0 zone resets 00:26:49.461 slat (usec): min=12, max=281, avg=26.84, stdev= 9.26 00:26:49.461 clat (usec): min=83, max=1671, avg=498.90, stdev=75.17 00:26:49.461 lat (usec): min=105, max=1695, avg=525.74, stdev=77.46 00:26:49.461 clat percentiles (usec): 00:26:49.461 | 50.000th=[ 502], 99.000th=[ 668], 99.900th=[ 914], 99.990th=[ 1565], 00:26:49.461 | 99.999th=[ 1680] 00:26:49.461 bw ( KiB/s): min=27336, max=33816, per=99.51%, avg=30684.21, stdev=1827.45, samples=19 00:26:49.461 iops : min= 6834, max= 8454, avg=7671.05, stdev=456.86, samples=19 00:26:49.461 lat (usec) : 20=0.01%, 100=4.57%, 250=26.32%, 500=43.76%, 750=25.22% 00:26:49.461 lat (usec) : 1000=0.10% 00:26:49.461 lat (msec) : 2=0.03% 00:26:49.461 cpu : usr=98.22%, sys=0.75%, ctx=24, majf=0, minf=6570 00:26:49.461 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:49.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.461 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.461 issued rwts: total=73736,76183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.461 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:49.461 00:26:49.461 Run status group 0 (all jobs): 00:26:49.461 READ: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=288MiB (302MB), run=10001-10001msec 00:26:49.461 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=298MiB (312MB), run=9883-9883msec 00:26:49.719 ----------------------------------------------------- 00:26:49.719 Suppressions used: 00:26:49.719 count bytes template 00:26:49.719 1 7 /usr/src/fio/parse.c 00:26:49.719 162 15552 /usr/src/fio/iolog.c 00:26:49.719 1 8 libtcmalloc_minimal.so 00:26:49.719 1 904 libcrypto.so 00:26:49.719 ----------------------------------------------------- 00:26:49.719 00:26:49.719 00:26:49.719 real 0m12.967s 00:26:49.719 user 0m13.341s 
00:26:49.719 sys 0m0.819s 00:26:49.719 13:20:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.719 13:20:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:49.719 ************************************ 00:26:49.719 END TEST bdev_fio_rw_verify 00:26:49.719 ************************************ 00:26:49.977 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- 
# '[' trim == verify ']' 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6412413a-8c41-4488-96f0-9fdefcfb1eda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6412413a-8c41-4488-96f0-9fdefcfb1eda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6412413a-8c41-4488-96f0-9fdefcfb1eda",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "8cef0c30-2595-4cb1-ab29-9f108567df68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ed22d6de-a40a-47a7-9c95-a3fc294b0655",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": 
"624e8c71-c2e7-42c6-9759-f85328f0164d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:26:49.978 /home/vagrant/spdk_repo/spdk 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:26:49.978 00:26:49.978 real 0m13.185s 00:26:49.978 user 0m13.443s 00:26:49.978 sys 0m0.917s 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.978 13:20:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:49.978 ************************************ 00:26:49.978 END TEST bdev_fio 00:26:49.978 ************************************ 00:26:49.978 13:20:56 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:49.978 13:20:56 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:49.978 13:20:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:26:49.978 13:20:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.978 13:20:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:49.978 ************************************ 00:26:49.978 START TEST bdev_verify 00:26:49.978 ************************************ 00:26:49.978 13:20:56 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:49.978 [2024-12-06 13:20:56.501809] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:26:49.978 [2024-12-06 13:20:56.502038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91344 ] 00:26:50.236 [2024-12-06 13:20:56.696055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:50.494 [2024-12-06 13:20:56.838548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.494 [2024-12-06 13:20:56.838562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.060 Running I/O for 5 seconds... 00:26:52.927 11853.00 IOPS, 46.30 MiB/s [2024-12-06T13:21:00.833Z] 11467.50 IOPS, 44.79 MiB/s [2024-12-06T13:21:01.768Z] 11710.33 IOPS, 45.74 MiB/s [2024-12-06T13:21:02.706Z] 11779.25 IOPS, 46.01 MiB/s [2024-12-06T13:21:02.706Z] 11909.40 IOPS, 46.52 MiB/s 00:26:56.177 Latency(us) 00:26:56.177 [2024-12-06T13:21:02.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.177 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:56.177 Verification LBA range: start 0x0 length 0x2000 00:26:56.177 raid5f : 5.01 5842.91 22.82 0.00 0.00 33027.67 2055.45 30265.72 00:26:56.177 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:56.177 Verification LBA range: start 0x2000 length 0x2000 00:26:56.177 raid5f : 5.02 6061.31 23.68 0.00 0.00 31845.39 256.93 25141.99 00:26:56.177 [2024-12-06T13:21:02.706Z] =================================================================================================================== 00:26:56.177 [2024-12-06T13:21:02.706Z] Total : 11904.22 46.50 0.00 0.00 32425.34 256.93 30265.72 00:26:57.556 
00:26:57.556 real 0m7.471s 00:26:57.556 user 0m13.621s 00:26:57.556 sys 0m0.349s 00:26:57.556 13:21:03 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.556 13:21:03 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:57.556 ************************************ 00:26:57.556 END TEST bdev_verify 00:26:57.556 ************************************ 00:26:57.556 13:21:03 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:57.556 13:21:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:26:57.556 13:21:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.556 13:21:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:57.556 ************************************ 00:26:57.556 START TEST bdev_verify_big_io 00:26:57.556 ************************************ 00:26:57.556 13:21:03 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:57.556 [2024-12-06 13:21:04.014544] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 
00:26:57.556 [2024-12-06 13:21:04.014714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91437 ] 00:26:57.814 [2024-12-06 13:21:04.199594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:58.073 [2024-12-06 13:21:04.347676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.073 [2024-12-06 13:21:04.347676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.640 Running I/O for 5 seconds... 00:27:00.956 506.00 IOPS, 31.62 MiB/s [2024-12-06T13:21:08.422Z] 569.50 IOPS, 35.59 MiB/s [2024-12-06T13:21:09.356Z] 612.67 IOPS, 38.29 MiB/s [2024-12-06T13:21:10.293Z] 634.50 IOPS, 39.66 MiB/s [2024-12-06T13:21:10.552Z] 659.60 IOPS, 41.23 MiB/s 00:27:04.023 Latency(us) 00:27:04.023 [2024-12-06T13:21:10.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.023 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:04.023 Verification LBA range: start 0x0 length 0x200 00:27:04.023 raid5f : 5.36 331.19 20.70 0.00 0.00 9470773.10 234.59 436588.92 00:27:04.023 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:04.023 Verification LBA range: start 0x200 length 0x200 00:27:04.023 raid5f : 5.24 339.19 21.20 0.00 0.00 9489231.11 325.82 400365.38 00:27:04.023 [2024-12-06T13:21:10.552Z] =================================================================================================================== 00:27:04.023 [2024-12-06T13:21:10.552Z] Total : 670.38 41.90 0.00 0.00 9480007.30 234.59 436588.92 00:27:05.446 00:27:05.446 real 0m7.752s 00:27:05.446 user 0m14.207s 00:27:05.446 sys 0m0.368s 00:27:05.446 13:21:11 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.446 13:21:11 
blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:05.446 ************************************ 00:27:05.446 END TEST bdev_verify_big_io 00:27:05.446 ************************************ 00:27:05.446 13:21:11 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:05.446 13:21:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:27:05.446 13:21:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.446 13:21:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:05.446 ************************************ 00:27:05.446 START TEST bdev_write_zeroes 00:27:05.446 ************************************ 00:27:05.446 13:21:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:05.446 [2024-12-06 13:21:11.797089] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:27:05.446 [2024-12-06 13:21:11.797237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91535 ] 00:27:05.446 [2024-12-06 13:21:11.967225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.705 [2024-12-06 13:21:12.101100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.270 Running I/O for 1 seconds... 
00:27:07.203 19551.00 IOPS, 76.37 MiB/s 00:27:07.203 Latency(us) 00:27:07.203 [2024-12-06T13:21:13.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.203 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:07.203 raid5f : 1.01 19533.47 76.30 0.00 0.00 6526.70 2100.13 8757.99 00:27:07.203 [2024-12-06T13:21:13.732Z] =================================================================================================================== 00:27:07.203 [2024-12-06T13:21:13.732Z] Total : 19533.47 76.30 0.00 0.00 6526.70 2100.13 8757.99 00:27:08.575 00:27:08.575 real 0m3.254s 00:27:08.575 user 0m2.840s 00:27:08.575 sys 0m0.285s 00:27:08.575 13:21:14 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.575 13:21:14 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:08.575 ************************************ 00:27:08.575 END TEST bdev_write_zeroes 00:27:08.575 ************************************ 00:27:08.575 13:21:15 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:08.575 13:21:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:27:08.575 13:21:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.575 13:21:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:08.575 ************************************ 00:27:08.575 START TEST bdev_json_nonenclosed 00:27:08.575 ************************************ 00:27:08.575 13:21:15 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:08.833 [2024-12-06 
13:21:15.132090] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:27:08.833 [2024-12-06 13:21:15.132316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91589 ] 00:27:08.833 [2024-12-06 13:21:15.321968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.092 [2024-12-06 13:21:15.455835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.092 [2024-12-06 13:21:15.455956] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:27:09.092 [2024-12-06 13:21:15.456000] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:09.092 [2024-12-06 13:21:15.456016] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:09.350 00:27:09.350 real 0m0.706s 00:27:09.350 user 0m0.440s 00:27:09.350 sys 0m0.161s 00:27:09.350 13:21:15 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.350 ************************************ 00:27:09.350 END TEST bdev_json_nonenclosed 00:27:09.350 13:21:15 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:09.350 ************************************ 00:27:09.350 13:21:15 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:09.350 13:21:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:27:09.350 13:21:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.350 13:21:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:09.350 
************************************ 00:27:09.350 START TEST bdev_json_nonarray 00:27:09.350 ************************************ 00:27:09.350 13:21:15 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:09.607 [2024-12-06 13:21:15.888787] Starting SPDK v25.01-pre git sha1 cf089b398 / DPDK 24.03.0 initialization... 00:27:09.607 [2024-12-06 13:21:15.889013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91613 ] 00:27:09.607 [2024-12-06 13:21:16.073149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.866 [2024-12-06 13:21:16.206529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.866 [2024-12-06 13:21:16.206686] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:27:09.866 [2024-12-06 13:21:16.206729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:09.866 [2024-12-06 13:21:16.206759] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:10.124 00:27:10.124 real 0m0.703s 00:27:10.124 user 0m0.441s 00:27:10.124 sys 0m0.156s 00:27:10.124 13:21:16 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.124 13:21:16 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:10.124 ************************************ 00:27:10.124 END TEST bdev_json_nonarray 00:27:10.124 ************************************ 00:27:10.124 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:27:10.124 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:27:10.124 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:27:10.125 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:27:10.125 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:27:10.125 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:10.125 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:10.125 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:27:10.125 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:27:10.125 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:27:10.125 13:21:16 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:27:10.125 00:27:10.125 real 0m50.140s 00:27:10.125 user 1m8.698s 00:27:10.125 sys 0m5.402s 00:27:10.125 13:21:16 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.125 13:21:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:27:10.125 
************************************ 00:27:10.125 END TEST blockdev_raid5f 00:27:10.125 ************************************ 00:27:10.125 13:21:16 -- spdk/autotest.sh@194 -- # uname -s 00:27:10.125 13:21:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:27:10.125 13:21:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:27:10.125 13:21:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:27:10.125 13:21:16 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:27:10.125 13:21:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.125 13:21:16 -- common/autotest_common.sh@10 -- # set +x 00:27:10.125 13:21:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:10.125 13:21:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:10.125 13:21:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:10.125 13:21:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:10.125 13:21:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:27:10.125 13:21:16 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:27:10.125 13:21:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:27:10.125 13:21:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.125 13:21:16 -- common/autotest_common.sh@10 -- # set +x 00:27:10.125 13:21:16 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:27:10.125 13:21:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:27:10.125 13:21:16 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:27:10.125 13:21:16 -- common/autotest_common.sh@10 -- # set +x 00:27:12.083 INFO: APP EXITING 00:27:12.083 INFO: killing all VMs 00:27:12.084 INFO: killing vhost app 00:27:12.084 INFO: EXIT DONE 00:27:12.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:12.084 Waiting for block devices as requested 00:27:12.084 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:12.341 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:12.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:12.907 Cleaning 00:27:12.907 Removing: /var/run/dpdk/spdk0/config 00:27:12.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:12.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:12.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:12.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:12.907 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:12.907 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:12.907 Removing: /dev/shm/spdk_tgt_trace.pid56956 00:27:12.907 Removing: /var/run/dpdk/spdk0 00:27:12.907 Removing: /var/run/dpdk/spdk_pid56715 00:27:13.165 Removing: /var/run/dpdk/spdk_pid56956 00:27:13.165 Removing: /var/run/dpdk/spdk_pid57190 00:27:13.165 Removing: /var/run/dpdk/spdk_pid57299 00:27:13.165 Removing: /var/run/dpdk/spdk_pid57350 00:27:13.165 Removing: /var/run/dpdk/spdk_pid57484 00:27:13.165 Removing: /var/run/dpdk/spdk_pid57502 
00:27:13.165 Removing: /var/run/dpdk/spdk_pid57712 00:27:13.165 Removing: /var/run/dpdk/spdk_pid57829 00:27:13.165 Removing: /var/run/dpdk/spdk_pid57942 00:27:13.165 Removing: /var/run/dpdk/spdk_pid58069 00:27:13.165 Removing: /var/run/dpdk/spdk_pid58177 00:27:13.165 Removing: /var/run/dpdk/spdk_pid58222 00:27:13.165 Removing: /var/run/dpdk/spdk_pid58259 00:27:13.165 Removing: /var/run/dpdk/spdk_pid58329 00:27:13.165 Removing: /var/run/dpdk/spdk_pid58446 00:27:13.165 Removing: /var/run/dpdk/spdk_pid58930 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59005 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59079 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59106 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59264 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59281 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59446 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59467 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59537 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59560 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59630 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59648 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59853 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59885 00:27:13.165 Removing: /var/run/dpdk/spdk_pid59974 00:27:13.165 Removing: /var/run/dpdk/spdk_pid61367 00:27:13.165 Removing: /var/run/dpdk/spdk_pid61584 00:27:13.165 Removing: /var/run/dpdk/spdk_pid61735 00:27:13.165 Removing: /var/run/dpdk/spdk_pid62395 00:27:13.165 Removing: /var/run/dpdk/spdk_pid62622 00:27:13.165 Removing: /var/run/dpdk/spdk_pid62763 00:27:13.165 Removing: /var/run/dpdk/spdk_pid63423 00:27:13.165 Removing: /var/run/dpdk/spdk_pid63764 00:27:13.165 Removing: /var/run/dpdk/spdk_pid63910 00:27:13.165 Removing: /var/run/dpdk/spdk_pid65327 00:27:13.165 Removing: /var/run/dpdk/spdk_pid65581 00:27:13.165 Removing: /var/run/dpdk/spdk_pid65732 00:27:13.165 Removing: /var/run/dpdk/spdk_pid67152 00:27:13.165 Removing: /var/run/dpdk/spdk_pid67416 00:27:13.165 Removing: /var/run/dpdk/spdk_pid67562 
00:27:13.165 Removing: /var/run/dpdk/spdk_pid68976 00:27:13.165 Removing: /var/run/dpdk/spdk_pid69440 00:27:13.165 Removing: /var/run/dpdk/spdk_pid69587 00:27:13.165 Removing: /var/run/dpdk/spdk_pid71102 00:27:13.165 Removing: /var/run/dpdk/spdk_pid71372 00:27:13.165 Removing: /var/run/dpdk/spdk_pid71523 00:27:13.165 Removing: /var/run/dpdk/spdk_pid73041 00:27:13.165 Removing: /var/run/dpdk/spdk_pid73307 00:27:13.165 Removing: /var/run/dpdk/spdk_pid73457 00:27:13.165 Removing: /var/run/dpdk/spdk_pid74978 00:27:13.165 Removing: /var/run/dpdk/spdk_pid75473 00:27:13.165 Removing: /var/run/dpdk/spdk_pid75619 00:27:13.165 Removing: /var/run/dpdk/spdk_pid75768 00:27:13.165 Removing: /var/run/dpdk/spdk_pid76215 00:27:13.165 Removing: /var/run/dpdk/spdk_pid76994 00:27:13.165 Removing: /var/run/dpdk/spdk_pid77398 00:27:13.165 Removing: /var/run/dpdk/spdk_pid78100 00:27:13.165 Removing: /var/run/dpdk/spdk_pid78586 00:27:13.166 Removing: /var/run/dpdk/spdk_pid79391 00:27:13.166 Removing: /var/run/dpdk/spdk_pid79839 00:27:13.166 Removing: /var/run/dpdk/spdk_pid81834 00:27:13.166 Removing: /var/run/dpdk/spdk_pid82288 00:27:13.166 Removing: /var/run/dpdk/spdk_pid82734 00:27:13.166 Removing: /var/run/dpdk/spdk_pid84861 00:27:13.166 Removing: /var/run/dpdk/spdk_pid85352 00:27:13.166 Removing: /var/run/dpdk/spdk_pid85861 00:27:13.166 Removing: /var/run/dpdk/spdk_pid86939 00:27:13.166 Removing: /var/run/dpdk/spdk_pid87268 00:27:13.166 Removing: /var/run/dpdk/spdk_pid88229 00:27:13.166 Removing: /var/run/dpdk/spdk_pid88556 00:27:13.166 Removing: /var/run/dpdk/spdk_pid89512 00:27:13.166 Removing: /var/run/dpdk/spdk_pid89835 00:27:13.166 Removing: /var/run/dpdk/spdk_pid90516 00:27:13.166 Removing: /var/run/dpdk/spdk_pid90792 00:27:13.166 Removing: /var/run/dpdk/spdk_pid90864 00:27:13.166 Removing: /var/run/dpdk/spdk_pid90902 00:27:13.166 Removing: /var/run/dpdk/spdk_pid91166 00:27:13.166 Removing: /var/run/dpdk/spdk_pid91344 00:27:13.166 Removing: /var/run/dpdk/spdk_pid91437 
00:27:13.166 Removing: /var/run/dpdk/spdk_pid91535 00:27:13.166 Removing: /var/run/dpdk/spdk_pid91589 00:27:13.166 Removing: /var/run/dpdk/spdk_pid91613 00:27:13.424 Clean 00:27:13.424 13:21:19 -- common/autotest_common.sh@1453 -- # return 0 00:27:13.424 13:21:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:27:13.424 13:21:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:13.424 13:21:19 -- common/autotest_common.sh@10 -- # set +x 00:27:13.424 13:21:19 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:27:13.424 13:21:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:13.424 13:21:19 -- common/autotest_common.sh@10 -- # set +x 00:27:13.424 13:21:19 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:13.424 13:21:19 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:13.424 13:21:19 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:13.424 13:21:19 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:27:13.424 13:21:19 -- spdk/autotest.sh@398 -- # hostname 00:27:13.424 13:21:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:13.683 geninfo: WARNING: invalid characters removed from testname! 
00:27:40.235 13:21:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:42.768 13:21:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:45.298 13:21:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:48.579 13:21:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:51.125 13:21:57 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:53.662 13:21:59 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:56.197 13:22:02 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:56.197 13:22:02 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:56.197 13:22:02 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:56.197 13:22:02 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:56.197 13:22:02 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:56.197 13:22:02 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:56.197 + [[ -n 5210 ]] 00:27:56.197 + sudo kill 5210 00:27:56.206 [Pipeline] } 00:27:56.222 [Pipeline] // timeout 00:27:56.227 [Pipeline] } 00:27:56.242 [Pipeline] // stage 00:27:56.248 [Pipeline] } 00:27:56.263 [Pipeline] // catchError 00:27:56.273 [Pipeline] stage 00:27:56.275 [Pipeline] { (Stop VM) 00:27:56.288 [Pipeline] sh 00:27:56.566 + vagrant halt 00:28:00.757 ==> default: Halting domain... 00:28:06.051 [Pipeline] sh 00:28:06.331 + vagrant destroy -f 00:28:10.520 ==> default: Removing domain... 
00:28:10.535 [Pipeline] sh 00:28:10.818 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:28:10.828 [Pipeline] } 00:28:10.846 [Pipeline] // stage 00:28:10.853 [Pipeline] } 00:28:10.869 [Pipeline] // dir 00:28:10.876 [Pipeline] } 00:28:10.892 [Pipeline] // wrap 00:28:10.899 [Pipeline] } 00:28:10.913 [Pipeline] // catchError 00:28:10.924 [Pipeline] stage 00:28:10.926 [Pipeline] { (Epilogue) 00:28:10.939 [Pipeline] sh 00:28:11.221 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:16.506 [Pipeline] catchError 00:28:16.508 [Pipeline] { 00:28:16.522 [Pipeline] sh 00:28:16.805 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:16.805 Artifacts sizes are good 00:28:16.814 [Pipeline] } 00:28:16.829 [Pipeline] // catchError 00:28:16.853 [Pipeline] archiveArtifacts 00:28:16.860 Archiving artifacts 00:28:16.967 [Pipeline] cleanWs 00:28:16.981 [WS-CLEANUP] Deleting project workspace... 00:28:16.981 [WS-CLEANUP] Deferred wipeout is used... 00:28:16.986 [WS-CLEANUP] done 00:28:16.988 [Pipeline] } 00:28:17.005 [Pipeline] // stage 00:28:17.011 [Pipeline] } 00:28:17.024 [Pipeline] // node 00:28:17.030 [Pipeline] End of Pipeline 00:28:17.188 Finished: SUCCESS